id (string) | title (string) | abstract (string) | full_text (dict) | qas (dict) | figures_and_tables (dict) | question (sequence) | retrieval_gt (sequence) | answer_gt (sequence) | __index_level_0__ (int64)
---|---|---|---|---|---|---|---|---|---|
1911.12237 | SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization | This paper introduces the SAMSum Corpus, a new dataset with abstractive dialogue summaries. We investigate the challenges it poses for automated summarization by testing several models and comparing their results with those obtained on a corpus of news articles. We show that model-generated summaries of dialogues achieve higher ROUGE scores than the model-generated summaries of news -- in contrast with human evaluators' judgement. This suggests that a challenging task of abstractive dialogue summarization requires dedicated models and non-standard quality measures. To our knowledge, our study is the first attempt to introduce a high-quality chat-dialogues corpus, manually annotated with abstractive summarizations, which can be used by the research community for further studies. | {
"paragraphs": [
[
"The goal of the summarization task is condensing a piece of text into a shorter version that covers the main points succinctly. In the abstractive approach important pieces of information are presented using words and phrases not necessarily appearing in the source text. This requires natural language generation techniques with high level of semantic understanding BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6.",
"Major research efforts have focused so far on summarization of single-speaker documents like news (e.g., BIBREF7) or scientific publications (e.g., BIBREF8). One of the reasons is the availability of large, high-quality news datasets with annotated summaries, e.g., CNN/Daily Mail BIBREF9, BIBREF7. Such a comprehensive dataset for dialogues is lacking.",
"The challenges posed by the abstractive dialogue summarization task have been discussed in the literature with regard to AMI meeting corpus BIBREF10, e.g. BIBREF11, BIBREF12, BIBREF13. Since the corpus has a low number of summaries (for 141 dialogues), BIBREF13 proposed to use assigned topic descriptions as gold references. These are short, label-like goals of the meeting, e.g., costing evaluation of project process; components, materials and energy sources; chitchat. Such descriptions, however, are very general, lacking the messenger-like structure and any information about the speakers.",
"To benefit from large news corpora, BIBREF14 built a dialogue summarization model that first converts a conversation into a structured text document and later applies an attention-based pointer network to create an abstractive summary. Their model, trained on structured text documents of CNN/Daily Mail dataset, was evaluated on the Argumentative Dialogue Summary Corpus BIBREF15, which, however, contains only 45 dialogues.",
"In the present paper, we further investigate the problem of abstractive dialogue summarization. With the growing popularity of online conversations via applications like Messenger, WhatsApp and WeChat, summarization of chats between a few participants is a new interesting direction of summarization research. For this purpose we have created the SAMSum Corpus which contains over 16k chat dialogues with manually annotated summaries. The dataset is freely available for the research community.",
"The paper is structured as follows: in Section SECREF2 we present details about the new corpus and describe how it was created, validated and cleaned. Brief description of baselines used in the summarization task can be found in Section SECREF3. In Section SECREF4, we describe our experimental setup and parameters of models. Both evaluations of summarization models, the automatic with ROUGE metric and the linguistic one, are reported in Section SECREF5 and Section SECREF6, respectively. Examples of models' outputs and some errors they make are described in Section SECREF7. Finally, discussion, conclusions and ideas for further research are presented in sections SECREF8 and SECREF9."
],
[
"Initial approach. Since there was no available corpus of messenger conversations, we considered two approaches to build it: (1) using existing datasets of documents, which have a form similar to chat conversations, (2) creating such a dataset by linguists.",
"In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.",
"As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.",
"Process of building the dataset. Our dialogue summarization dataset contains natural messenger-like conversations created and written down by linguists fluent in English. The style and register of conversations are diversified – dialogues could be informal, semi-formal or formal, they may contain slang phrases, emoticons and typos. We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.",
"Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary. Validation. Since the SAMSum corpus contains dialogues created by linguists, the question arises whether such conversations are really similar to those typically written via messenger apps. To find the answer, we performed a validation task. We asked two linguists to doubly annotate 50 conversations in order to verify whether the dialogues could appear in a messenger app and could be summarized (i.e. a dialogue is not too general or unintelligible) or not (e.g. a dialogue between two people in a shop). The results revealed that 94% of examined dialogues were classified by both annotators as good i.e. they do look like conversations from a messenger app and could be condensed in a reasonable way. In a similar validation task, conducted for the existing dialogue-type datasets (described in the Initial approach section), the annotators agreed that only 28% of the dialogues resembled conversations from a messenger app.",
"Cleaning data. After preparing the dataset, we conducted a process of cleaning it in a semi-automatic way. Beforehand, we specified a format for written dialogues with summaries: a colon should separate an author of utterance from its content, each utterance is expected to be in a separate line. Therefore, we could easily find all deviations from the agreed structure – some of them could be automatically fixed (e.g. when instead of a colon, someone used a semicolon right after the interlocutor's name at the beginning of an utterance), others were passed for verification to linguists. We also tried to correct typos in interlocutors' names (if one person has several utterances, it happens that, before one of them, there is a typo in his/her name) – we used the Levenshtein distance to find very similar names (possibly with typos e.g. 'George' and 'Goerge') in a single conversation, and those cases with very similar names were passed to linguists for verification.",
"Description. The created dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations), the rest is between three or more people. Table TABREF3 presents the size of the dataset split used in our experiments. The example of a dialogue from this corpus is shown in Table TABREF4."
],
[
"The baseline commonly used in the news summarization task is Lead-3 BIBREF4, which takes three leading sentences of the document as the summary. The underlying assumption is that the beginning of the article contains the most significant information. Inspired by the Lead-n model, we propose a few different simple models:",
"MIDDLE-n, which takes n utterances from the middle of the dialogue,",
"LONGEST-n, treating only n longest utterances in order of length as a summary,",
"LONGER-THAN-n, taking only utterances longer than n characters in order of length (if there is no such long utterance in the dialogue, takes the longest one),",
"MOST-ACTIVE-PERSON, which treats all utterances of the most active person in the dialogue as a summary.",
"Results of the evaluation of the above models are reported in Table TABREF9. There is no obvious baseline for the task of dialogues summarization. We expected rather low results for Lead-3, as the beginnings of the conversations usually contain greetings, not the main part of the discourse. However, it seems that in our dataset greetings are frequently combined with question-asking or information passing (sometimes they are even omitted) and such a baseline works even better than the MIDDLE baseline (taking utterances from the middle of a dialogue). Nevertheless, the best dialogue baseline turns out to be the LONGEST-3 model."
],
[
"This section contains a description of setting used in the experiments carried out."
],
[
"In order to build a dialogue summarization model, we adopt the following strategies: (1) each candidate architecture is trained and evaluated on the dialogue dataset; (2) each architecture is trained on the train set of CNN/Daily Mail joined together with the train set of the dialogue data, and evaluated on the dialogue test set.",
"In addition, we prepare a version of dialogue data, in which utterances are separated with a special token called the separator (artificially added token e.g. '$<$EOU$>$' for models using word embeddings, '$|$' for models using subword embeddings). In all our experiments, news and dialogues are truncated to 400 tokens, and summaries – to 100 tokens. The maximum length of generated summaries was not limited."
],
[
"We carry out experiments with the following summarization models (for all architectures we set the beam size for beam search decoding to 5):",
"Pointer generator network BIBREF4. In the case of Pointer Generator, we use a default configuration, changing only the minimum length of the generated summary from 35 (used in news) to 15 (used in dialogues).",
"Transformer BIBREF16. The model is trained using OpenNMT library. We use the same parameters for training both on news and on dialogues, changing only the minimum length of the generated summary – 35 for news and 15 for dialogues.",
"Fast Abs RL BIBREF5. It is trained using its default parameters. For dialogues, we change the convolutional word-level sentence encoder (used in extractor part) to only use kernel with size equal 3 instead of 3-5 range. It is caused by the fact that some of utterances are very short and the default setting is unable to handle that.",
"Fast Abs RL Enhanced. The additional variant of the Fast Abs RL model with slightly changed utterances i.e. to each utterance, at the end, after artificial separator, we add names of all other interlocutors. The reason for that is that Fast Abs RL requires text to be split into sentences (as it selects sentences and then paraphrase each of them). For dialogues, we divide text into utterances (which is a natural unit in conversations), so sometimes, a single utterance may contain more than one sentence. Taking into account how this model works, it may happen that it selects an utterance of a single person (each utterance starts with the name of the author of the utterance) and has no information about other interlocutors (if names of other interlocutors do not appear in selected utterances), so it may have no chance to use the right people's names in generated summaries.",
"LightConv and DynamicConv BIBREF17. The implementation is available in fairseq BIBREF18. We train lightweight convolution models in two manners: (1) learning token representations from scratch; in this case we apply BPE tokenization with the vocabulary of 30K types, using fastBPE implementation BIBREF19; (2) initializing token embeddings with pre-trained language model representations; as a language model we choose GPT-2 small BIBREF20."
],
[
"We evaluate models with the standard ROUGE metric BIBREF21, reporting the $F_1$ scores (with stemming) for ROUGE-1, ROUGE-2 and ROUGE-L following previous works BIBREF5, BIBREF4. We obtain scores using the py-rouge package."
],
[
"The results for the news summarization task are shown in Table TABREF25 and for the dialogue summarization – in Table TABREF26. In both domains, the best models' ROUGE-1 exceeds 39, ROUGE-2 – 17 and ROUGE-L – 36. Note that the strong baseline for news (Lead-3) is outperformed in all three metrics only by one model. In the case of dialogues, all tested models perform better than the baseline (LONGEST-3).",
"In general, the Transformer-based architectures benefit from training on the joint dataset: news+dialogues, even though the news and the dialogue documents have very different structures. Interestingly, this does not seem to be the case for the Pointer Generator or Fast Abs RL model.",
"The inclusion of a separation token between dialogue utterances is advantageous for most models – presumably because it improves the discourse structure. The improvement is most visible when training is performed on the joint dataset.",
"Having compared two variants of the Fast Abs RL model – with original utterances and with enhanced ones (see Section SECREF11), we conclude that enhancing utterances with information about the other interlocutors helps achieve higher ROUGE values.",
"The largest improvement of the model performance is observed for LightConv and DynamicConv models when they are complemented with pretrained embeddings from the language model GPT-2, trained on enormous corpora.",
"It is also worth noting that some models (Pointer Generator, Fast Abs RL), trained only on the dialogues corpus (which has 16k dialogues), reach similar level (or better) in terms of ROUGE metrics than models trained on the CNN/DM news dataset (which has more than 300k articles). Adding pretrained embeddings and training on the joined dataset helps in achieving significantly higher values of ROUGE for dialogues than the best models achieve on the CNN/DM news dataset.",
"According to ROUGE metrics, the best performing model is DynamicConv with GPT-2 embeddings, trained on joined news and dialogue data with an utterance separation token."
],
[
"ROUGE is a standard way of evaluating the quality of machine generated summaries by comparing them with reference ones. The metric based on n-gram overlapping, however, may not be very informative for abstractive summarization, where paraphrasing is a keypoint in producing high-quality sentences. To quantify this conjecture, we manually evaluated summaries generated by the models for 150 news and 100 dialogues. We asked two linguists to mark the quality of every summary on the scale of $-1$, 0, 1, where $-1$ means that a summarization is poor, extracts irrelevant information or does not make sense at all, 1 – it is understandable and gives a brief overview of the text, and 0 stands for a summarization that extracts only a part of relevant information, or makes some mistakes in the produced summary.",
"We noticed a few annotations (7 for news and 4 for dialogues) with opposite marks (i.e. one annotator judgement was $-1$, whereas the second one was 1) and decided to have them annotated once again by another annotator who had to resolve conflicts. For the rest, we calculated the linear weighted Cohen's kappa coefficient BIBREF22 between annotators' scores. For news examples, we obtained agreement on the level of $0.371$ and for dialogues – $0.506$. The annotators' agreement is higher on dialogues than on news, probably because of structures of those data – articles are often long and it is difficult to decide what the key-point of the text is; dialogues, on the contrary, are rather short and focused mainly on one topic.",
"For manually evaluated samples, we calculated ROUGE metrics and the mean of two human ratings; the prepared statistics is presented in Table TABREF27. As we can see, models generating dialogue summaries can obtain high ROUGE results, but their outputs are marked as poor by human annotators. Our conclusion is that the ROUGE metric corresponds with the quality of generated summaries for news much better than for dialogues, confirmed by Pearson's correlation between human evaluation and the ROUGE metric, shown in Table TABREF28."
],
[
"In a structured text, such as a news article, the information flow is very clear. However, in a dialogue, which contains discussions (e.g. when people try to agree on a date of a meeting), questions (one person asks about something and the answer may appear a few utterances later) and greetings, most important pieces of information are scattered across the utterances of different speakers. What is more, articles are written in the third-person point of view, but in a chat everyone talks about themselves, using a variety of pronouns, which further complicates the structure. Additionally, people talking on messengers often are in a hurry, so they shorten words, use the slang phrases (e.g. 'u r gr8' means 'you are great') and make typos. These phenomena increase the difficulty of performing dialogue summarization.",
"Table TABREF34 and TABREF35 show a few selected dialogues, together with summaries produced by the best tested models:",
"DynamicConv + GPT-2 embeddings with a separator (trained on news + dialogues),",
"DynamicConv + GPT-2 embeddings (trained on news + dialogues),",
"Fast Abs RL (trained on dialogues),",
"Fast Abs RL Enhanced (trained on dialogues),",
"Transformer (trained on news + dialogues).",
"One can easily notice problematic issues. Firstly, the models frequently have difficulties in associating names with actions, often repeating the same name, e.g., for Dialogue 1 in Table TABREF34, Fast Abs RL generates the following summary: 'lilly and lilly are going to eat salmon'. To help the model deal with names, the utterances are enhanced by adding information about the other interlocutors – Fast Abs RL enhanced variant described in Section SECREF11. In this case, after enhancement, the model generates a summary containing both interlocutors' names: 'lily and gabriel are going to pasta...'. Sometimes models correctly choose speakers' names when generating a summary, but make a mistake in deciding who performs the action (the subject) and who receives the action (the object), e.g. for Dialogue 4 DynamicConv + GPT-2 emb. w/o sep. model generates the summary 'randolph will buy some earplugs for maya', while the correct form is 'maya will buy some earplugs for randolph'. A closely related problem is capturing the context and extracting information about the arrangements after the discussion. For instance, for Dialogue 4, the Fast Abs RL model draws a wrong conclusion from the agreed arrangement. This issue is quite frequently visible in summaries generated by Fast Abs RL, which may be the consequence of the way it is constructed; it first chooses important utterances, and then summarizes each of them separately. This leads to the narrowing of the context and loosing important pieces of information.",
"One more aspect of summary generation is deciding which information in the dialogue content is important. For instance, for Dialogue 3 DynamicConv + GPT-2 emb. with sep. generates a correct summary, but focuses on a piece of information different than the one included in the reference summary. In contrast, some other models – like Fast Abs RL enhanced – select both of the pieces of information appearing in the discussion. On the other hand, when summarizing Dialogue 5, the models seem to focus too much on the phrase 'it's the best place', intuitively not the most important one to summarize."
],
[
"This paper is a step towards abstractive summarization of dialogues by (1) introducing a new dataset, created for this task, (2) comparison with news summarization by the means of automated (ROUGE) and human evaluation.",
"Most of the tools and the metrics measuring the quality of text summarization have been developed for a single-speaker document, such as news; as such, they are not necessarily the best choice for conversations with several speakers.",
"We test a few general-purpose summarization models. In terms of human evaluation, the results of dialogues summarization are worse than the results of news summarization. This is connected with the fact that the dialogue structure is more complex – information is spread in multiple utterances, discussions, questions, more typos and slang words appear there, posing new challenges for summarization. On the other hand, dialogues are divided into utterances, and for each utterance its author is assigned. We demonstrate in experiments that the models benefit from the introduction of separators, which mark utterances for each person. This suggests that dedicated models having some architectural changes, taking into account the assignation of a person to an utterance in a systematic manner, could improve the quality of dialogue summarization.",
"We show that the most popular summarization metric ROUGE does not reflect the quality of a summary. Looking at the ROUGE scores, one concludes that the dialogue summarization models perform better than the ones for news summarization. In fact, this hypothesis is not true – we performed an independent, manual analysis of summaries and we demonstrated that high ROUGE results, obtained for automatically-generated dialogue summaries, correspond with lower evaluation marks given by human annotators. An interesting example of the misleading behavior of the ROUGE metrics is presented in Table TABREF35 for Dialogue 4, where a wrong summary – 'paul and cindy don't like red roses.' – obtained all ROUGE values higher than a correct summary – 'paul asks cindy what color flowers should buy.'. Despite lower ROUGE values, news summaries were scored higher by human evaluators. We conclude that when measuring the quality of model-generated summaries, the ROUGE metrics are more indicative for news than for dialogues, and a new metric should be designed to measure the quality of abstractive dialogue summaries."
],
[
"In our paper we have studied the challenges of abstractive dialogue summarization. We have addressed a major factor that prevents researchers from engaging into this problem: the lack of a proper dataset. To the best of our knowledge, this is the first attempt to create a comprehensive resource of this type which can be used in future research. The next step could be creating an even more challenging dataset with longer dialogues that not only cover one topic, but span over numerous different ones.",
"As shown, summarization of dialogues is much more challenging than of news. In order to perform well, it may require designing dedicated tools, but also new, non-standard measures to capture the quality of abstractive dialogue summaries in a relevant way. We hope to tackle these issues in future work."
],
[
"We would like to express our sincere thanks to Tunia Błachno, Oliwia Ebebenge, Monika Jędras and Małgorzata Krawentek for their huge contribution to the corpus collection – without their ideas, management of the linguistic task and verification of examples we would not be able to create this paper. We are also grateful for the reviewers' helpful comments and suggestions."
]
],
"section_name": [
"Introduction and related work",
"SAMSum Corpus",
"Dialogues baselines",
"Experimental setup",
"Experimental setup ::: Data preparation",
"Experimental setup ::: Models",
"Experimental setup ::: Evaluation metrics",
"Results",
"Linguistic verification of summaries",
"Difficulties in dialogue summarization",
"Discussion",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"7ad578af0551699c375350431025392761af3190",
"cab6d3fc00345cb48f22c5f541009b92611e06a9"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"For this purpose we have created the SAMSum Corpus which contains over 16k chat dialogues with manually annotated summaries.",
"Each dialogue contains only one reference summary."
],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary. Validation. Since the SAMSum corpus contains dialogues created by linguists, the question arises whether such conversations are really similar to those typically written via messenger apps. To find the answer, we performed a validation task. We asked two linguists to doubly annotate 50 conversations in order to verify whether the dialogues could appear in a messenger app and could be summarized (i.e. a dialogue is not too general or unintelligible) or not (e.g. a dialogue between two people in a shop). The results revealed that 94% of examined dialogues were classified by both annotators as good i.e. they do look like conversations from a messenger app and could be condensed in a reasonable way. In a similar validation task, conducted for the existing dialogue-type datasets (described in the Initial approach section), the annotators agreed that only 28% of the dialogues resembled conversations from a messenger app."
],
"extractive_spans": [
"Each dialogue contains only one reference summary."
],
"free_form_answer": "",
"highlighted_evidence": [
"Each dialogue contains only one reference summary."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"93d827788135491221ed7f1a164bb215fe969991",
"99797165a4ab1f6224af227ed6df59a7fcfb56e3"
],
"answer": [
{
"evidence": [
"ROUGE is a standard way of evaluating the quality of machine generated summaries by comparing them with reference ones. The metric based on n-gram overlapping, however, may not be very informative for abstractive summarization, where paraphrasing is a keypoint in producing high-quality sentences. To quantify this conjecture, we manually evaluated summaries generated by the models for 150 news and 100 dialogues. We asked two linguists to mark the quality of every summary on the scale of $-1$, 0, 1, where $-1$ means that a summarization is poor, extracts irrelevant information or does not make sense at all, 1 – it is understandable and gives a brief overview of the text, and 0 stands for a summarization that extracts only a part of relevant information, or makes some mistakes in the produced summary."
],
"extractive_spans": [
"We asked two linguists to mark the quality of every summary on the scale of $-1$, 0, 1, where $-1$ means that a summarization is poor, extracts irrelevant information or does not make sense at all, 1 – it is understandable and gives a brief overview of the text, and 0 stands for a summarization that extracts only a part of relevant information, or makes some mistakes in the produced summary."
],
"free_form_answer": "",
"highlighted_evidence": [
"We asked two linguists to mark the quality of every summary on the scale of $-1$, 0, 1, where $-1$ means that a summarization is poor, extracts irrelevant information or does not make sense at all, 1 – it is understandable and gives a brief overview of the text, and 0 stands for a summarization that extracts only a part of relevant information, or makes some mistakes in the produced summary."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"ROUGE is a standard way of evaluating the quality of machine generated summaries by comparing them with reference ones. The metric based on n-gram overlapping, however, may not be very informative for abstractive summarization, where paraphrasing is a keypoint in producing high-quality sentences. To quantify this conjecture, we manually evaluated summaries generated by the models for 150 news and 100 dialogues. We asked two linguists to mark the quality of every summary on the scale of $-1$, 0, 1, where $-1$ means that a summarization is poor, extracts irrelevant information or does not make sense at all, 1 – it is understandable and gives a brief overview of the text, and 0 stands for a summarization that extracts only a part of relevant information, or makes some mistakes in the produced summary."
],
"extractive_spans": [
"$-1$ means that a summarization is poor, extracts irrelevant information or does not make sense at all",
"0 stands for a summarization that extracts only a part of relevant information, or makes some mistakes in the produced summary",
"1 – it is understandable and gives a brief overview of the text"
],
"free_form_answer": "",
"highlighted_evidence": [
"To quantify this conjecture, we manually evaluated summaries generated by the models for 150 news and 100 dialogues. We asked two linguists to mark the quality of every summary on the scale of $-1$, 0, 1, where $-1$ means that a summarization is poor, extracts irrelevant information or does not make sense at all, 1 – it is understandable and gives a brief overview of the text, and 0 stands for a summarization that extracts only a part of relevant information, or makes some mistakes in the produced summary."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"51bca0847aa9890bc5d2fc0fad4b6a97d15c207a",
"96cfe40f8f24d4725cefa5a7bd54293c03ecb0ad"
],
"answer": [
{
"evidence": [
"The baseline commonly used in the news summarization task is Lead-3 BIBREF4, which takes three leading sentences of the document as the summary. The underlying assumption is that the beginning of the article contains the most significant information. Inspired by the Lead-n model, we propose a few different simple models:",
"MIDDLE-n, which takes n utterances from the middle of the dialogue,",
"LONGEST-n, treating only n longest utterances in order of length as a summary,",
"LONGER-THAN-n, taking only utterances longer than n characters in order of length (if there is no such long utterance in the dialogue, takes the longest one),",
"MOST-ACTIVE-PERSON, which treats all utterances of the most active person in the dialogue as a summary.",
"We carry out experiments with the following summarization models (for all architectures we set the beam size for beam search decoding to 5):",
"Pointer generator network BIBREF4. In the case of Pointer Generator, we use a default configuration, changing only the minimum length of the generated summary from 35 (used in news) to 15 (used in dialogues).",
"Transformer BIBREF16. The model is trained using OpenNMT library. We use the same parameters for training both on news and on dialogues, changing only the minimum length of the generated summary – 35 for news and 15 for dialogues.",
"Fast Abs RL BIBREF5. It is trained using its default parameters. For dialogues, we change the convolutional word-level sentence encoder (used in extractor part) to only use kernel with size equal 3 instead of 3-5 range. It is caused by the fact that some of utterances are very short and the default setting is unable to handle that.",
"Fast Abs RL Enhanced. The additional variant of the Fast Abs RL model with slightly changed utterances i.e. to each utterance, at the end, after artificial separator, we add names of all other interlocutors. The reason for that is that Fast Abs RL requires text to be split into sentences (as it selects sentences and then paraphrase each of them). For dialogues, we divide text into utterances (which is a natural unit in conversations), so sometimes, a single utterance may contain more than one sentence. Taking into account how this model works, it may happen that it selects an utterance of a single person (each utterance starts with the name of the author of the utterance) and has no information about other interlocutors (if names of other interlocutors do not appear in selected utterances), so it may have no chance to use the right people's names in generated summaries.",
"LightConv and DynamicConv BIBREF17. The implementation is available in fairseq BIBREF18. We train lightweight convolution models in two manners: (1) learning token representations from scratch; in this case we apply BPE tokenization with the vocabulary of 30K types, using fastBPE implementation BIBREF19; (2) initializing token embeddings with pre-trained language model representations; as a language model we choose GPT-2 small BIBREF20."
],
"extractive_spans": [],
"free_form_answer": "MIDDLE-n, LONGEST-n, LONGER-THAN-n and MOST-ACTIVE-PERSON are the baselines, and experiments also carried out on Pointer generator networks, Transformers, Fast Abs RL, Fast Abs RL Enhanced, LightConv and DynamicConv ",
"highlighted_evidence": [
"The baseline commonly used in the news summarization task is Lead-3 BIBREF4, which takes three leading sentences of the document as the summary. The underlying assumption is that the beginning of the article contains the most significant information. Inspired by the Lead-n model, we propose a few different simple models:\n\nMIDDLE-n, which takes n utterances from the middle of the dialogue,\n\nLONGEST-n, treating only n longest utterances in order of length as a summary,\n\nLONGER-THAN-n, taking only utterances longer than n characters in order of length (if there is no such long utterance in the dialogue, takes the longest one),\n\nMOST-ACTIVE-PERSON, which treats all utterances of the most active person in the dialogue as a summary.",
"We carry out experiments with the following summarization models (for all architectures we set the beam size for beam search decoding to 5):\n\nPointer generator network BIBREF4. In the case of Pointer Generator, we use a default configuration, changing only the minimum length of the generated summary from 35 (used in news) to 15 (used in dialogues).\n\nTransformer BIBREF16. The model is trained using OpenNMT library. We use the same parameters for training both on news and on dialogues, changing only the minimum length of the generated summary – 35 for news and 15 for dialogues.\n\nFast Abs RL BIBREF5. It is trained using its default parameters. For dialogues, we change the convolutional word-level sentence encoder (used in extractor part) to only use kernel with size equal 3 instead of 3-5 range. It is caused by the fact that some of utterances are very short and the default setting is unable to handle that.\n\nFast Abs RL Enhanced. The additional variant of the Fast Abs RL model with slightly changed utterances i.e. to each utterance, at the end, after artificial separator, we add names of all other interlocutors. The reason for that is that Fast Abs RL requires text to be split into sentences (as it selects sentences and then paraphrase each of them). For dialogues, we divide text into utterances (which is a natural unit in conversations), so sometimes, a single utterance may contain more than one sentence. Taking into account how this model works, it may happen that it selects an utterance of a single person (each utterance starts with the name of the author of the utterance) and has no information about other interlocutors (if names of other interlocutors do not appear in selected utterances), so it may have no chance to use the right people's names in generated summaries.\n\nLightConv and DynamicConv BIBREF17. The implementation is available in fairseq BIBREF18. We train lightweight convolution models in two manners: (1) learning token representations from scratch; in this case we apply BPE tokenization with the vocabulary of 30K types, using fastBPE implementation BIBREF19; (2) initializing token embeddings with pre-trained language model representations; as a language model we choose GPT-2 small BIBREF20."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We carry out experiments with the following summarization models (for all architectures we set the beam size for beam search decoding to 5):",
"Pointer generator network BIBREF4. In the case of Pointer Generator, we use a default configuration, changing only the minimum length of the generated summary from 35 (used in news) to 15 (used in dialogues).",
"Transformer BIBREF16. The model is trained using OpenNMT library. We use the same parameters for training both on news and on dialogues, changing only the minimum length of the generated summary – 35 for news and 15 for dialogues.",
"Fast Abs RL BIBREF5. It is trained using its default parameters. For dialogues, we change the convolutional word-level sentence encoder (used in extractor part) to only use kernel with size equal 3 instead of 3-5 range. It is caused by the fact that some of utterances are very short and the default setting is unable to handle that.",
"Fast Abs RL Enhanced. The additional variant of the Fast Abs RL model with slightly changed utterances i.e. to each utterance, at the end, after artificial separator, we add names of all other interlocutors. The reason for that is that Fast Abs RL requires text to be split into sentences (as it selects sentences and then paraphrase each of them). For dialogues, we divide text into utterances (which is a natural unit in conversations), so sometimes, a single utterance may contain more than one sentence. Taking into account how this model works, it may happen that it selects an utterance of a single person (each utterance starts with the name of the author of the utterance) and has no information about other interlocutors (if names of other interlocutors do not appear in selected utterances), so it may have no chance to use the right people's names in generated summaries.",
"LightConv and DynamicConv BIBREF17. The implementation is available in fairseq BIBREF18. We train lightweight convolution models in two manners: (1) learning token representations from scratch; in this case we apply BPE tokenization with the vocabulary of 30K types, using fastBPE implementation BIBREF19; (2) initializing token embeddings with pre-trained language model representations; as a language model we choose GPT-2 small BIBREF20."
],
"extractive_spans": [
"Pointer generator network",
"Transformer",
"Fast Abs RL",
"Fast Abs RL Enhanced",
"LightConv and DynamicConv"
],
"free_form_answer": "",
"highlighted_evidence": [
"We carry out experiments with the following summarization models (for all architectures we set the beam size for beam search decoding to 5):\n\nPointer generator network BIBREF4. In the case of Pointer Generator, we use a default configuration, changing only the minimum length of the generated summary from 35 (used in news) to 15 (used in dialogues).\n\nTransformer BIBREF16. The model is trained using OpenNMT library. We use the same parameters for training both on news and on dialogues, changing only the minimum length of the generated summary – 35 for news and 15 for dialogues.\n\nFast Abs RL BIBREF5. It is trained using its default parameters. For dialogues, we change the convolutional word-level sentence encoder (used in extractor part) to only use kernel with size equal 3 instead of 3-5 range.",
"Fast Abs RL Enhanced. The additional variant of the Fast Abs RL model with slightly changed utterances i.e. to each utterance, at the end, after artificial separator, we add names of all other interlocutors.",
"LightConv and DynamicConv BIBREF17. The implementation is available in fairseq BIBREF18."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"005a0ad93a317df31a191cc8cbee72be8c9114ef",
"1bf3016c2617457e7f57232254f7207f7a0c7e77"
],
"answer": [
{
"evidence": [
"We show that the most popular summarization metric ROUGE does not reflect the quality of a summary. Looking at the ROUGE scores, one concludes that the dialogue summarization models perform better than the ones for news summarization. In fact, this hypothesis is not true – we performed an independent, manual analysis of summaries and we demonstrated that high ROUGE results, obtained for automatically-generated dialogue summaries, correspond with lower evaluation marks given by human annotators. An interesting example of the misleading behavior of the ROUGE metrics is presented in Table TABREF35 for Dialogue 4, where a wrong summary – 'paul and cindy don't like red roses.' – obtained all ROUGE values higher than a correct summary – 'paul asks cindy what color flowers should buy.'. Despite lower ROUGE values, news summaries were scored higher by human evaluators. We conclude that when measuring the quality of model-generated summaries, the ROUGE metrics are more indicative for news than for dialogues, and a new metric should be designed to measure the quality of abstractive dialogue summaries."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We conclude that when measuring the quality of model-generated summaries, the ROUGE metrics are more indicative for news than for dialogues, and a new metric should be designed to measure the quality of abstractive dialogue summaries."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"We evaluate models with the standard ROUGE metric BIBREF21, reporting the $F_1$ scores (with stemming) for ROUGE-1, ROUGE-2 and ROUGE-L following previous works BIBREF5, BIBREF4. We obtain scores using the py-rouge package.",
"We show that the most popular summarization metric ROUGE does not reflect the quality of a summary. Looking at the ROUGE scores, one concludes that the dialogue summarization models perform better than the ones for news summarization. In fact, this hypothesis is not true – we performed an independent, manual analysis of summaries and we demonstrated that high ROUGE results, obtained for automatically-generated dialogue summaries, correspond with lower evaluation marks given by human annotators. An interesting example of the misleading behavior of the ROUGE metrics is presented in Table TABREF35 for Dialogue 4, where a wrong summary – 'paul and cindy don't like red roses.' – obtained all ROUGE values higher than a correct summary – 'paul asks cindy what color flowers should buy.'. Despite lower ROUGE values, news summaries were scored higher by human evaluators. We conclude that when measuring the quality of model-generated summaries, the ROUGE metrics are more indicative for news than for dialogues, and a new metric should be designed to measure the quality of abstractive dialogue summaries."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate models with the standard ROUGE metric BIBREF21, reporting the $F_1$ scores (with stemming) for ROUGE-1, ROUGE-2 and ROUGE-L following previous works BIBREF5, BIBREF4. ",
"We conclude that when measuring the quality of model-generated summaries, the ROUGE metrics are more indicative for news than for dialogues, and a new metric should be designed to measure the quality of abstractive dialogue summaries."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"annotation_id": [
"35ef51c80e090912025415ebd973e4afab82ac50",
"bfc603ac673fa3221d9fcadc409fe7ebe7e6f768"
],
"answer": [
{
"evidence": [
"In the present paper, we further investigate the problem of abstractive dialogue summarization. With the growing popularity of online conversations via applications like Messenger, WhatsApp and WeChat, summarization of chats between a few participants is a new interesting direction of summarization research. For this purpose we have created the SAMSum Corpus which contains over 16k chat dialogues with manually annotated summaries. The dataset is freely available for the research community.",
"Description. The created dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations), the rest is between three or more people. Table TABREF3 presents the size of the dataset split used in our experiments. The example of a dialogue from this corpus is shown in Table TABREF4."
],
"extractive_spans": [
"16369 conversations"
],
"free_form_answer": "",
"highlighted_evidence": [
"For this purpose we have created the SAMSum Corpus which contains over 16k chat dialogues with manually annotated summaries.",
"The created dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In the present paper, we further investigate the problem of abstractive dialogue summarization. With the growing popularity of online conversations via applications like Messenger, WhatsApp and WeChat, summarization of chats between a few participants is a new interesting direction of summarization research. For this purpose we have created the SAMSum Corpus which contains over 16k chat dialogues with manually annotated summaries. The dataset is freely available for the research community."
],
"extractive_spans": [
"contains over 16k chat dialogues with manually annotated summaries"
],
"free_form_answer": "",
"highlighted_evidence": [
"For this purpose we have created the SAMSum Corpus which contains over 16k chat dialogues with manually annotated summaries."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How many abstractive summarizations exist for each dialogue?",
"How is human evaluators' judgement measured, what was the criteria?",
"What models have been evaluated?",
"Do authors propose some better metric than ROUGE for measurement of abstractive dialogue summarization?",
"How big is SAMSum Corpus?"
],
"question_id": [
"44bf3047ff7e5c6b727b2aaa0805dd66c907dcd6",
"c6f2598b85dc74123fe879bf23aafc7213853f5b",
"bdae851d4cf1d05506cf3e8359786031ac4f756f",
"894bbb1e42540894deb31c04cba0e6cfb10ea912",
"75b3e2d2caec56e5c8fbf6532070b98d70774b95"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Datasets sizes",
"Table 3: Baselines for the dialogues summarization",
"Table 2: Example of a dialogue from the collected corpus",
"Table 4: Model evaluation on the news corpus test set",
"Table 5: Model evaluation on the dialogues corpus test set",
"Table 6: Statistics of human evaluation of summaries’ quality and ROUGE evaluation of those summaries",
"Table 7: Pearson’s correlations between human judgement and ROUGE metric",
"Table 8: Examples of dialogues (Part 1). REF – reference summary, L3 – LONGEST-3 baseline, DS – DynamicConv + GPT-2 emb. with sep., D – DynamicConv + GPT-2 emb., F – Fast Abs RL, FE – Fast Abs RL Enhanced, T – Transformer. For L3, three longest utterances are listed. Rounded ROUGE values [R-1/R-2/R-L] are given in square brackets.",
"Table 9: Examples of dialogues (Part 2). REF – reference summary, L3 – LONGEST-3 baseline, DS – DynamicConv + GPT-2 emb. with sep., D – DynamicConv + GPT-2 emb., F – Fast Abs RL, FE – Fast Abs RL Enhanced, T – Transformer. For L3, three longest utterances are listed. Rounded ROUGE values [R-1/R-2/R-L] are given in square brackets."
],
"file": [
"2-Table1-1.png",
"3-Table3-1.png",
"3-Table2-1.png",
"5-Table4-1.png",
"6-Table5-1.png",
"7-Table6-1.png",
"7-Table7-1.png",
"8-Table8-1.png",
"9-Table9-1.png"
]
} | [
"What models have been evaluated?"
] | [
[
"1911.12237-Experimental setup ::: Models-5",
"1911.12237-Dialogues baselines-4",
"1911.12237-Experimental setup ::: Models-3",
"1911.12237-Experimental setup ::: Models-4",
"1911.12237-Dialogues baselines-0",
"1911.12237-Dialogues baselines-3",
"1911.12237-Experimental setup ::: Models-1",
"1911.12237-Experimental setup ::: Models-2",
"1911.12237-Experimental setup ::: Models-0",
"1911.12237-Dialogues baselines-2",
"1911.12237-Dialogues baselines-1"
]
] | [
"MIDDLE-n, LONGEST-n, LONGER-THAN-n and MOST-ACTIVE-PERSON are the baselines, and experiments also carried out on Pointer generator networks, Transformers, Fast Abs RL, Fast Abs RL Enhanced, LightConv and DynamicConv "
] | 206 |
1909.07873 | Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model | Recently, generating adversarial examples has become an important means of measuring robustness of a deep learning model. Adversarial examples help us identify the susceptibilities of the model and further counter those vulnerabilities by applying adversarial training techniques. In natural language domain, small perturbations in the form of misspellings or paraphrases can drastically change the semantics of the text. We propose a reinforcement learning based approach towards generating adversarial examples in black-box settings. We demonstrate that our method is able to fool well-trained models for (a) IMDB sentiment classification task and (b) AG's news corpus news categorization task with significantly high success rates. We find that the adversarial examples generated are semantics-preserving perturbations to the original text. | {
"paragraphs": [
[
"Adversarial examples are generally minimal perturbations applied to the input data in an effort to expose the regions of the input space where a trained model performs poorly. Prior works BIBREF0, BIBREF1 have demonstrated the ability of an adversary to evade state-of-the-art classifiers by carefully crafting attack examples which can be even imperceptible to humans. Following such approaches, there has been a number of techniques aimed at generating adversarial examples BIBREF2, BIBREF3. Depending on the degree of access to the target model, an adversary may operate in one of the two different settings: (a) black-box setting, where an adversary doesn't have access to target model's internal architecture or its parameters, (b) white-box setting, where an adversary has access to the target model, its parameters, and input feature representations. In both these settings, the adversary cannot alter the training data or the target model itself. Depending on the purpose of the adversary, adversarial attacks can be categorized as (a) targeted attack and (b) non-targeted attack. In a targeted attack, the output category of a generated example is intentionally controlled to a specific target category with limited change in semantic information. While a non-targeted attack doesn't care about the category of misclassified results.",
"Most of the prior work has focused on image classification models where adversarial examples are obtained by introducing imperceptible changes to pixel values through optimization techniques BIBREF4, BIBREF5. However, generating natural language adversarial examples can be challenging mainly due to the discrete nature of text samples. Continuous data like image or speech is much more tolerant to perturbations compared to text BIBREF6. In textual domain, even a small perturbation is clearly perceptible and can completely change the semantics of the text. Another challenge for generating adversarial examples relates to identifying salient areas of the text where a perturbation can be applied successfully to fool the target classifier. In addition to fooling the target classifier, the adversary is designed with different constraints depending on the task and its motivations BIBREF7. In our work, we focus on constraining our adversary to craft examples with semantic preservation and minimum perturbations to the input text.",
"Given different settings of the adversary, there are other works that have designed attacks in “gray-box” settings BIBREF8, BIBREF9, BIBREF10. However, the definitions of “gray-box” attacks are quite different in each of these approaches. In this paper, we focus on “black-box” setting where we assume that the adversary possesses a limited set of labeled data, which is different from the target's training data, and also has an oracle access to the system, i.e., one can query the target classifier with any input and get its corresponding predictions. We propose an effective technique to generate adversarial examples in a black-box setting. We develop an Adversarial Example Generator (AEG) model that uses a reinforcement learning framing to generate adversarial examples. We evaluate our models using a word-based BIBREF11 and character-based BIBREF12 text classification model on benchmark classification tasks: sentiment classification and news categorization. The adversarial sequences generated are able to effectively fool the classifiers without changing the semantics of the text. Our contributions are as follows:",
"We propose a black-box non-targeted attack strategy by combining ideas of substitute network and adversarial example generation. We formulate it as a reinforcement learning task.",
"We introduce an encoder-decoder that operates over words and characters of an input text and empowers the model to introduce word and character-level perturbations.",
"We adopt a self-critical sequence training technique to train our model to generate examples that can fool or increase the probability of misclassification in text classifiers.",
"We evaluate our models on two different datasets associated with two different tasks: IMDB sentiment classification and AG's news categorization task. We run ablation studies on various components of the model and provide insights into decisions of our model."
],
[
"Generating adversarial examples to bypass deep learning classification models have been widely studied. In a white-box setting, some of the approaches include gradient-based BIBREF13, BIBREF6, decision function-based BIBREF2 and spatial transformation based perturbation techniquesBIBREF3. In a black-box setting, several attack strategies have been proposed based on the property of transferability BIBREF1. Papernot et al. BIBREF14, BIBREF15 relied on this transferability property where adversarial examples, generated on one classifier, are likely to cause another classifier to make the same mistake, irrespective of their architecture and training dataset. In order to generate adversarial samples, a local substitute model was trained with queries to the target model. Many learning systems allow query accesses to the model. However, there is little work that can leverage query-based access to target models to construct adversarial samples and move beyond transferability. These studies have primarily focused on image-based classifiers and cannot be directly applied to text-based classifiers.",
"While there is limited literature for such approaches in NLP systems, there have been some studies that have exposed the vulnerabilities of neural networks in text-based tasks like machine translations and question answering. Belinkov and Bisk BIBREF16 investigated the sensitivity of neural machine translation (NMT) to synthetic and natural noise containing common misspellings. They demonstrate that state-of-the-art models are vulnerable to adversarial attacks even after a spell-checker is deployed. Jia et al. BIBREF17 showed that networks trained for more difficult tasks, such as question answering, can be easily fooled by introducing distracting sentences into text, but these results do not transfer obviously to simpler text classification tasks. Following such works, different methods with the primary purpose of crafting adversarial example have been explored. Recently, a work by Ebrahimi et al. BIBREF18 developed a gradient-based optimization method that manipulates discrete text structure at its one-hot representation to generate adversarial examples in a white-box setting. In another white-box based attack, Gong et al. BIBREF19 perturbed the word embedding of given text examples and projected them to the nearest neighbour in the embedding space. This approach is an adaptation of perturbation algorithms for images. Though the size and quality of embedding play a critical role, this targeted attack technique ensured that the generated text sequence is intelligible.",
"Alzantot et al. BIBREF20 proposed a black-box targeted attack using a population-based optimization via genetic algorithm BIBREF21. The perturbation procedure consists of random selection of words, finding their nearest neighbours, ranking and substitution to maximize the probability of target category. In this method, random word selection in the sequence to substitute were full of uncertainties and might be meaningless for the target label when changed. Since our model focuses on black-box non-targeted attack using an encoder-decoder approach, our work is closely related to the following techniques in the literature: Wong (2017) BIBREF22, Iyyer et al. BIBREF23 and Gao et al. BIBREF24. Wong (2017) BIBREF22 proposed a GAN-inspired method to generate adversarial text examples targeting black-box classifiers. However, this approach was restricted to binary text classifiers. Iyyer et al. BIBREF23 crafted adversarial examples using their proposed Syntactically Controlled Paraphrase Networks (SCPNs). They designed this model for generating syntactically adversarial examples without compromising on the quality of the input semantics. The general process is based on the encoder-decoder architecture of SCPN. Gao et al. BIBREF24 implemented an algorithm called DeepWordBug that generates small text perturbations in a black box setting forcing the deep learning model to make mistakes. DeepWordBug used a scoring function to determine important tokens and then applied character-level transformations to those tokens. Though the algorithm successfully generates adversarial examples by introducing character-level attacks, most of the introduced perturbations are constricted to misspellings. The semantics of the text may be irreversibly changed if excessive misspellings are introduced to fool the target classifier. While SCPNs and DeepWordBug primary rely only on paraphrases and character transformations respectively to fool the classifier, our model uses a hybrid word-character encoder-decoder approach to introduce both paraphrases and character-level perturbations as a part of our attack strategy. Our attacks can be a test of how robust the text classification models are to word and character-level perturbations."
],
[
"Let us consider a target model $T$ and $(x,l)$ refers to the samples from the dataset. Given an instance $x$, the goal of the adversary is to generate adversarial examples $x^{\\prime }$ such that $T(x^{\\prime }) \\ne l$, where $l$ denotes the true label i.e take one of the $K$ classes of the target classification model. The changes made to $x$ to get $x^{\\prime }$ are called perturbations. We would like to have $x^{\\prime }$ close to the original instance $x$. In a black box setting, we do not have knowledge about the internals of the target model or its training data. Previous work by Papernot et al. BIBREF14 train a separate substitute classifier such that it can mimic the decision boundaries of the target classifier. The substitute classifier is then used to craft adversarial examples. While these techniques have been applied for image classification models, such methods have not been explored extensively for text.",
"We implement both the substitute network training and adversarial example generation using an encoder-decoder architecture called Adversarial Examples Generator (AEG). The encoder extracts the character and word information from the input text and produces hidden representations of words considering its sequence context information. A substitute network is not implemented separately but applied using an attention mechanism to weigh the encoded hidden states based on their relevance to making predictions closer to target model outputs. The attention scores provide certain level of interpretability to the model as the regions of text that need to perturbed can be identified and visualized. The decoder uses the attention scores obtained from the substitute network, combines it with decoder state information to decide if perturbation is required at this state or not and finally emits the text unit (a text unit may refer to a word or character). Inspired by a work by Luong et al. BIBREF25, the decoder is a word and character-level recurrent network employed to generate adversarial examples. Before the substitute network is trained, we pretrain our encoder-decoder model on common misspellings and paraphrase datasets to empower the model to produce character and word perturbations in the form of misspellings or paraphrases. For training substitute network and generation of adversarial examples, we randomly draw data that is disjoint from the training data of the black-box model since we assume the adversaries have no prior knowledge about the training data or the model. Specifically, we consider attacking a target classifier by generating adversarial examples based on unseen input examples. This is done by dividing the dataset into training, validation and test using 60-30-10 ratio. The training data is used by the target model, while the unseen validation samples are used with necessary data augmentation for our AEG model. We further improve our model by using a self-critical approach to finally generate better adversarial examples. The rewards are formulated based on the following goals: (a) fool the target classifier, (b) minimize the number of perturbations and (c) preserve the semantics of the text. In the following sections, we explain the encoder-decoder model and then describe the reinforcement learning framing towards generation of adversarial examples."
],
[
"Most of the sequence generation models follow an encoder-decoder framework BIBREF26, BIBREF27, BIBREF28 where encoder and decoder are modelled by separate recurrent neural networks. Usually these models are trained using a pair of text $(x,y)$ where $x=[x_1, x_2..,x_n]$ is the input text and the $y=[y_1, y_2..,y_m]$ is the target text to be generated. The encoder transforms an input text sequence into an abstract representation $h$. While the decoder is employed to generate the target sequence using the encoded representation $h$. However, there are several studies that have incorporated several modifications to the standard encoder-decoder framework BIBREF29, BIBREF25, BIBREF30."
],
[
"Based on Bahdanau et al. BIBREF29, we encode the input text sequence using bidirectional gated recurrent units (GRUs) to encode the input text sequence $x$. Formally, we obtain an encoded representation given by: $\\overleftrightarrow{h_t}= \\overleftarrow{h_t} + \\overrightarrow{h_t}$."
],
[
"The decoder is a forward GRU implementing an attention mechanism to recognize the units of input text sequence relevant for the generation of the next target work. The decoder GRU generates the next text unit at time step $j$ by conditioning on the current decoder state $s_j$, context vector $c_j$ computed using attention mechanism and previously generated text units. The probability of decoding each target unit is given by:",
"where $f_d$ is used to compute a new attentional hidden state $\\tilde{s_j}$. Given the encoded input representations $\\overleftrightarrow{H}=\\lbrace \\overleftrightarrow{h_1}, ...,\\overleftrightarrow{h_n}\\rbrace $ and the previous decoder GRU state $s_{j-1}$, the context vector at time step $j$ is computed as: $c_j= Attn(\\overleftrightarrow{H}, s_{j-1})$. $Attn(\\cdot ,\\cdot )$ computes a weight $\\alpha _{jt}$ indicating the degree of relevance of an input text unit $x_t$ for predicting the target unit $y_j$ using a feed-forward network $f_{attn}$. Given a parallel corpus $D$, we train our model by minimizing the cross-entropy loss: $J=\\sum _{(x,y)\\in D}{-log p(y|x)}$."
],
[
"In this task of adversarial example generation, we have black-box access to the target model; the generator is not aware of the target model architecture or parameters and is only capable of querying the target model with supplied inputs and obtaining the output predictions. To enable the model to have capabilities to generate word and character perturbations, we develop a hybrid encoder-decoder model, Adversarial Examples Generator (AEG), that operates at both word and character level to generate adversarial examples. Below, we explain the components of this model which have been improved to handle both word and character information from the text sequence."
],
[
"The encoder maps the input text sequence into a sequence of representations using word and character-level information. Our encoder (Figure FIGREF10) is a slight variant of Chen et al.BIBREF31. This approach providing multiple levels of granularity can be useful in order to handle rare or noisy words in the text. Given character embeddings $E^{(c)}=[e_1^{(c)}, e_2^{(c)},...e_{n^{\\prime }}^{(c)}]$ and word embeddings $E^{(w)}=[e_1^{(w)}, e_2^{(w)},...e_{n}^{(w)}]$ of the input, starting ($p_t$) and ending ($q_t$) character positions at time step $t$, we define inside character embeddings as: $E_I^{(c)}=[e_{p_t}^{(c)},...., e_{q_t}^{(c)}]$ and outside embeddings as: $E_O^{(c)}=[e_{1}^{(c)},....,e_{p_t-1}^{(c)}; e_{q_t+1}^{(c)},...,e_{n^{\\prime }}^{(c)}]$. First, we obtain the character-enhanced word representation $\\overleftrightarrow{h_t}$ by combining the word information from $E^{(w)}$ with the character context vectors. Character context vectors are obtained by attending over inside and outside character embeddings. Next, we compute a summary vector $S$ over the hidden states $\\overleftrightarrow{h_t}$ using an attention layer expressed as $Attn(\\overleftrightarrow{H})$. To generate adversarial examples, it is important to identify the most relevant text units that contribute towards the target model's prediction and then use this information during the decoding step to introduce perturbation on those units. Hence, the summary vector is optimized using target model predictions without back propagating through the entire encoder. This acts as a substitute network that learns to mimic the predictions of the target classifier."
],
[
"Our AEG should be able to generate both character and word level perturbations as necessary. We achieve this by modifying the standard decoder BIBREF29, BIBREF30 to have two-level decoder GRUs: word-GRU and character-GRU (see Figure FIGREF14). Such hybrid approaches have been studied to achieve open vocabulary NMT in some of the previous work like Wu et al. BIBREF32 and Luong et al. BIBREF25. Given the challenge that all different word misspellings cannot fit in a fixed vocabulary, we leverage the power of both words and characters in our generation procedure. The word-GRU uses word context vector $c_j^{(w)}$ by attending over the encoder hidden states $\\overleftrightarrow{h_t}$. Once the word context vector $c_j^{(w)}$ is computed, we introduce a perturbation vector $v_{p}$ to impart information about the need for any word or character perturbations at this decoding step. We construct this vector using the word-GRU decoder state $s_j^{(w)}$, context vector $c_j^{(w)}$ and summary vector $S$ from the encoder as:",
"We modify the the Equation (DISPLAY_FORM8) as: $\\tilde{s}_j^{(w)}=f_{d}^{(w)}([c_j^{(w)};s_j^{(w)};v_{p}])$. The character-GRU will decide if the word is emitted with or without misspellings. We don't apply step-wise attention for character-GRU, instead we initialize it with the correct context. The ideal candidate representing the context must combine information about: (a) the word obtained from $c_j^{(w)}, s_j^{(w)}$, (b) its character alignment with the input characters derived from character context vector $c_j^{(c)}$ with respect to the word-GRU's state and (c) perturbation embedded in $v_p$. This yields,",
"Thus, $\\tilde{s}_j^{(c)}$ is initialized to the character-GRU only for the first hidden state. With this mechanism, both word and character level information can be used to introduce necessary perturbations."
],
[
"The primary purpose of pretraining AEG is to enable our hybrid encoder-decoder to encode both character and word information from the input example and produce both word and character-level transformations in the form of paraphrases or misspellings. Though the pretraining helps us mitigate the cold-start issue, it does not guarantee that these perturbed texts will fool the target model. There are large number of valid perturbations that can be applied due to multiple ways of arranging text units to produce paraphrases or different misspellings. Thus, minimizing $J_{mle}$ is not sufficient to generate adversarial examples."
],
[
"In this paper, we use paraphrase datasets like PARANMT-50M corpusBIBREF33, Quora Question Pair dataset and Twitter URL paraphrasing corpus BIBREF34. These paraphrase datasets together contains text from various sources: Common Crawl, CzEng1.6, Europarl, News Commentary, Quora questions, and Twitter trending topic tweets. We do not use all the data for our pretraining. We randomly sample 5 million parallel texts and augment them using simple character-transformations (eg. random insertion, deletion or replacement) to words in the text. The number of words that undergo transformation is capped at 10% of the total number of words in the text. We further include examples which contain only character-transformations without paraphrasing the original input."
],
[
"AEG is pre-trained using teacher-forcing algorithm BIBREF35 on the dataset explained in Section SECREF3. Consider an input text: “movie was good” that needs to be decoded into the following target perturbed text: “film is gud”. The word “gud” might be out-of-vocabulary indicated by $<oov>$. Hence, we compute the loss incurred by word-GRU decoder, $J^{(w)}$, when predicting {“film”, “is”, “$<oov>$”} and loss incurred by character-GRU decoder, $J^{(c)}$, when predicting {`f', `i',`l', `m', `_'},{`i',`s','_'},{`g', `u',`d',`_'}. Therefore, the training objective in Section SECREF7 is modified into:"
],
[
"We fine-tune our model to fool a target classifier by learning a policy that maximizes a specific discrete metric formulated based on the constraints required to generate adversarial examples. In our work, we use the self-critical approach of Rennie et al. BIBREF36 as our policy gradient training algorithm."
],
[
"In SCST approach, the model learns to gather more rewards from its sampled sequences that bring higher rewards than its best greedy counterparts. First, we compute two sequences: (a) $y^{\\prime }$ sampled from the model's distribution $p(y^{\\prime }_j|y^{\\prime }_{<j},h)$ and (b) $\\hat{y}$ obtained by greedily decoding ($argmax$ predictions) from the distribution $p(\\hat{y}_j|\\hat{y}_{<j},h)$ Next, rewards $r(y^{\\prime }_j),r(\\hat{y}_j)$ are computed for both the sequences using a reward function $r(\\cdot )$, explained in Section SECREF26. We train the model by minimizing:",
"Here $r(\\hat{y})$ can be viewed as the baseline reward. This approach, therefore, explores different sequences that produce higher reward compared to the current best policy."
],
[
"The reward $r(\\hat{y})$ for the sequence generated is a weighted sum of different constraints required for generating adversarial examples. Since our model operates at word and character levels, we therefore compute three rewards: adversarial reward, semantic similarity and lexical similarity reward. The reward should be high when: (a) the generated sequence causes the target model to produce a low classification prediction probability for its ground truth category, (b) semantic similarity is preserved and (c) the changes made to the original text are minimal."
],
[
"Given a target model $T$, it takes a text sequence $y$ and outputs prediction probabilities $P$ across various categories of the target model. Given an input sample $(x, l)$, we compute a perturbation using our AEG model and produce a sequence $y$. We compute the adversarial reward as $R_{A}=(1-P_l)$, where the ground truth $l$ is an index to the list of categories and $P_l$ is the probability that the perturbed generated sequence $y$ belongs to target ground truth $l$. Since we want the target classifier to make mistakes, we promote it by rewarding higher when the sequences produce low target probabilities."
],
[
"Inspired by the work of Li et al. BIBREF37, we train a deep matching model that can represent the degree of match between two texts. We use character based biLSTM models with attention BIBREF38 to handle word and character level perturbations. The matching model will help us compute the the semantic similarity $R_S$ between the text generated and the original input text."
],
[
"Since our model functions at both character and word level, we compute the lexical similarity. The purpose of this reward is to keep the changes as minimal as possible to just fool the target classifier. Motivated by the recent work of Moon et al. BIBREF39, we pretrain a deep neural network to compute approximate Levenshtein distance $R_{L}$ composed of character based bi-LSTM model. We replicate that model by generating a large number of text with perturbations in the form of insertions, deletions or replacements. We also include words which are prominent nicknames, abbreviations or inconsistent notations to have more lexical similarity. This is generally not possible using direct Levenshtein distance computation. Once trained, it can produce a purely lexical embedding of the text without semantic allusion. This can be used to compute the lexical similarity between the generated text $y$ and the original input text $x$ for our purpose.",
"Finally, we combine all these three rewards using:",
"where $\\gamma _A, \\gamma _S, \\gamma _L$ are hyperparameters that can be modified depending upon the kind of textual generations expected from the model. The changes inflicted by different reward coefficients can be seen in Section SECREF44."
],
[
"We trained our models on 4 GPUs. The parameters of our hybrid encoder-decoder were uniformly initialized to $[-0.1, 0.1]$. The optimization algorithm used is Adam BIBREF40. The encoder word embedding matrices were initialized with 300-dimensional Glove vectors BIBREF41. During reinforcement training, we used plain stochastic gradient descent with a learning rate of 0.01. Using a held-out validation set, the hyper-parameters for our experiments are set as follows: $\\gamma _A=1, \\gamma _S=0.5, \\gamma _L=0.25$."
],
[
"In this section, we describe the evaluation setup used to measure the effectiveness of our model in generating adversarial examples. The success of our model lies in its ability to fool the target classifier. We pretrain our models with dataset that generates a number of character and word perturbations. We elaborate on the experimental setup and the results below."
],
[
"We conduct experiments on different datasets to verify if the accuracy of the deep learning models decrease when fed with the adversarial examples generated by our model. We use benchmark sentiment classification and news categorization datasets and the details are as follows:",
"Sentiment classification: We trained a word-based convolutional model (CNN-Word) BIBREF11 on IMDB sentiment dataset . The dataset contains 50k movie reviews in total which are labeled as positive or negative. The trained model achieves a test accuracy of 89.95% which is relatively close to the state-of-the-art results on this dataset.",
"News categorization: We perform our experiments on AG's news corpus with a character-based convolutional model (CNN-Char) BIBREF12. The news corpus contains titles and descriptions of various news articles along with their respective categories. There are four categories: World, Sports, Business and Sci/Tech. The trained CNN-Char model achieves a test accuracy of 89.11%.",
"Table TABREF29 summarizes the data and models used in our experiments. We compare our proposed model with the following black-box non-targeted attacks:",
"Random: We randomly select a word in the text and introduce some perturbation to that word in the form of a character replacement or synonymous word replacement. No specific strategy to identify importance of words.",
"NMT-BT: We generate paraphrases of the sentences of the text using a back-translation approach BIBREF23. We used pretrained English$\\leftrightarrow $German translation models to obtain back-translations of input examples.",
"DeepWordBug BIBREF24: A scoring function is used to determine the important tokens to change. The tokens are then modified to evade a target model.",
"No-RL: We use our pretrained model without the reinforcement learning objective.",
"The performance of these methods are measured by the percentage fall in accuracy of these models on the generated adversarial texts. Higher the percentage dip in the accuracy of the target classifier, more effective is our model."
],
[
"We analyze the effectiveness of our approach by comparing the results from using two different baselines against character and word-based models trained on different datasets. Table TABREF40 demonstrates the capability of our model. Without the reinforcement learning objective, the No-RL model performs better than the back-translation approach(NMT-BT). The improvement can be attributed to the word and character perturbations introduced by our hybrid encoder-decoder model as opposed to only paraphrases in the former model. Our complete AEG model outperforms all the other models with significant drop in accuracy. For the CNN-Word, DeepWordBug decreases the accuracy from 89.95% to 28.13% while AEG model further reduces it to 18.5%.",
"It is important to note that our model is able to expose the weaknesses of the target model irrespective of the nature of the model (either word or character level). It is interesting that even simple lexical substitutions and paraphrases can break such models on both datasets we tested. Across different models, the character-based models are less susceptible to adversarial attacks compared to word-based models as they are able to handle misspellings and provide better generalizations."
],
[
"We also evaluated our model based on human judgments. We conducted an experiment where the workers were presented with randomly sampled 100 adversarial examples generated by our model which were successful in fooling the target classifier. The examples were shuffled to mitigate ordering bias, and every example was annotated by three workers. The workers were asked to label the sentiment of the sampled adversarial example. For every adversarial example shown, we also showed the original text and asked them to rate their similarity on a scale from 0 (Very Different) to 3 (Very Similar). We found that the perturbations produced by our model do not affect the human judgments significantly as $94.6\\%$ of the human annotations matched with the ground-truth label of the original text. The average similarity rating of $1.916$ also indicated that the generated adversarial sequences are semantics-preserving."
],
[
"In this section, we make different modifications to our encoder and decoder to weigh the importance of these techniques: (a) No perturbation vector (No Pert) and finally (b) a simple character based decoder (Char-dec) but involves perturbation vector. Table TABREF40 shows that the absence of hybrid decoder leads to a significant drop in the performance of our model. The main reason we believe is that hybrid decoder is able to make targeted attacks on specific words which otherwise is lost while generating text using a pure-character based decoder. In the second case case, the most important words associated with the prediction of the target model are identified by the summary vector. When the perturbation vector is used, it carries forward this knowledge and decides if a perturbation should be performed at this step or not. This can be verified even in Figure FIGREF43, where the regions of high attention get perturbed in the text generated."
],
[
"We qualitatively analyze the results by visualizing the attention scores and the perturbations introduces by our model. We further evaluate the importance of hyperparameters $\\gamma _{(.)}$ in the reward function. We set only one of the hyperparameters closer to 1 and set the remaining closer to zero to see how it affects the text generation. The results can be seen in Figure FIGREF43. Based on a subjective qualitative evaluation, we make the following observations:",
"Promisingly, it identifies the most important words that contribute to particular categorization. The model introduces misspellings or word replacements without significant change in semantics of the text.",
"When the coefficient associated only with adversarial reward goes to 1, it begins to slowly deviate though not completely. This is motivated by the initial pretraining step on paraphrases and perturbations."
],
[
"In this work, we have introduced a $AEG$, a model capable of generating adversarial text examples to fool the black-box text classification models. Since we do not have access to gradients or parameters of the target model, we modelled our problem using a reinforcement learning based approach. In order to effectively baseline the REINFORCE algorithm for policy-gradients, we implemented a self-critical approach that normalizes the rewards obtained by sampled sentences with the rewards obtained by the model under test-time inference algorithm. By generating adversarial examples for target word and character-based models trained on IMDB reviews and AG's news dataset, we find that our model is capable of generating semantics-preserving perturbations that leads to steep decrease in accuracy of those target models. We conducted ablation studies to find the importance of individual components of our system. Extremely low values of the certain reward coefficient constricts the quantitative performance of the model can also lead to semantic divergence. Therefore, the choice of a particular value for this model should be motivated by the demands of the context in which it is applied. One of the main challenges of such approaches lies in the ability to produce more synthetic data to train the generator model in the distribution of the target model's training data. This can significantly improve the performance of our model. We hope that our method motivates a more nuanced exploration into generating adversarial examples and adversarial training for building robust classification models."
]
],
"section_name": [
"Introduction",
"Related Work",
"Proposed Attack Strategy",
"Proposed Attack Strategy ::: Background and Notations",
"Proposed Attack Strategy ::: Background and Notations ::: Encoder",
"Proposed Attack Strategy ::: Background and Notations ::: Decoder",
"Adversarial Examples Generator (AEG) Architecture",
"Adversarial Examples Generator (AEG) Architecture ::: Encoder",
"Adversarial Examples Generator (AEG) Architecture ::: Decoder",
"Training ::: Supervised Pretraining with Teacher Forcing",
"Training ::: Supervised Pretraining with Teacher Forcing ::: Dataset Collection",
"Training ::: Supervised Pretraining with Teacher Forcing ::: Training Objective",
"Training ::: Training with Reinforcement learning",
"Training ::: Training with Reinforcement learning ::: Self-critical sequence training (SCST)",
"Training ::: Training with Reinforcement learning ::: Rewards",
"Training ::: Training with Reinforcement learning ::: Rewards ::: Adversarial Reward",
"Training ::: Training with Reinforcement learning ::: Rewards ::: Semantic Similarity",
"Training ::: Training with Reinforcement learning ::: Rewards ::: Lexical Similarity",
"Training ::: Training Details",
"Experiments",
"Experiments ::: Setup",
"Experiments ::: Quantitative Analysis",
"Experiments ::: Human Evaluation",
"Experiments ::: Ablation Studies",
"Experiments ::: Qualitative Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"935b1eb8d4dbb884f9eb9f4320ac506729d5fe6d",
"b8aeeb8421514e220b30a7453813807eaa4e37bf"
],
"answer": [
{
"evidence": [
"We also evaluated our model based on human judgments. We conducted an experiment where the workers were presented with randomly sampled 100 adversarial examples generated by our model which were successful in fooling the target classifier. The examples were shuffled to mitigate ordering bias, and every example was annotated by three workers. The workers were asked to label the sentiment of the sampled adversarial example. For every adversarial example shown, we also showed the original text and asked them to rate their similarity on a scale from 0 (Very Different) to 3 (Very Similar). We found that the perturbations produced by our model do not affect the human judgments significantly as $94.6\\%$ of the human annotations matched with the ground-truth label of the original text. The average similarity rating of $1.916$ also indicated that the generated adversarial sequences are semantics-preserving."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We also evaluated our model based on human judgments. We conducted an experiment where the workers were presented with randomly sampled 100 adversarial examples generated by our model which were successful in fooling the target classifier."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"We also evaluated our model based on human judgments. We conducted an experiment where the workers were presented with randomly sampled 100 adversarial examples generated by our model which were successful in fooling the target classifier. The examples were shuffled to mitigate ordering bias, and every example was annotated by three workers. The workers were asked to label the sentiment of the sampled adversarial example. For every adversarial example shown, we also showed the original text and asked them to rate their similarity on a scale from 0 (Very Different) to 3 (Very Similar). We found that the perturbations produced by our model do not affect the human judgments significantly as $94.6\\%$ of the human annotations matched with the ground-truth label of the original text. The average similarity rating of $1.916$ also indicated that the generated adversarial sequences are semantics-preserving."
],
"extractive_spans": [],
"free_form_answer": "Only 100 successfully adversarial examples were manually checked, not all of them.",
"highlighted_evidence": [
"We also evaluated our model based on human judgments. We conducted an experiment where the workers were presented with randomly sampled 100 adversarial examples generated by our model which were successful in fooling the target classifier. The examples were shuffled to mitigate ordering bias, and every example was annotated by three workers. The workers were asked to label the sentiment of the sampled adversarial example."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"annotation_id": [
"14f503fec766a407fe02bc2cded83d8f96ca9739",
"de7ae24a206d7a962f154d4362a678be824e9f68"
],
"answer": [
{
"evidence": [
"Alzantot et al. BIBREF20 proposed a black-box targeted attack using a population-based optimization via genetic algorithm BIBREF21. The perturbation procedure consists of random selection of words, finding their nearest neighbours, ranking and substitution to maximize the probability of target category. In this method, random word selection in the sequence to substitute were full of uncertainties and might be meaningless for the target label when changed. Since our model focuses on black-box non-targeted attack using an encoder-decoder approach, our work is closely related to the following techniques in the literature: Wong (2017) BIBREF22, Iyyer et al. BIBREF23 and Gao et al. BIBREF24. Wong (2017) BIBREF22 proposed a GAN-inspired method to generate adversarial text examples targeting black-box classifiers. However, this approach was restricted to binary text classifiers. Iyyer et al. BIBREF23 crafted adversarial examples using their proposed Syntactically Controlled Paraphrase Networks (SCPNs). They designed this model for generating syntactically adversarial examples without compromising on the quality of the input semantics. The general process is based on the encoder-decoder architecture of SCPN. Gao et al. BIBREF24 implemented an algorithm called DeepWordBug that generates small text perturbations in a black box setting forcing the deep learning model to make mistakes. DeepWordBug used a scoring function to determine important tokens and then applied character-level transformations to those tokens. Though the algorithm successfully generates adversarial examples by introducing character-level attacks, most of the introduced perturbations are constricted to misspellings. The semantics of the text may be irreversibly changed if excessive misspellings are introduced to fool the target classifier. While SCPNs and DeepWordBug primary rely only on paraphrases and character transformations respectively to fool the classifier, our model uses a hybrid word-character encoder-decoder approach to introduce both paraphrases and character-level perturbations as a part of our attack strategy. Our attacks can be a test of how robust the text classification models are to word and character-level perturbations.",
"Let us consider a target model $T$ and $(x,l)$ refers to the samples from the dataset. Given an instance $x$, the goal of the adversary is to generate adversarial examples $x^{\\prime }$ such that $T(x^{\\prime }) \\ne l$, where $l$ denotes the true label i.e take one of the $K$ classes of the target classification model. The changes made to $x$ to get $x^{\\prime }$ are called perturbations. We would like to have $x^{\\prime }$ close to the original instance $x$. In a black box setting, we do not have knowledge about the internals of the target model or its training data. Previous work by Papernot et al. BIBREF14 train a separate substitute classifier such that it can mimic the decision boundaries of the target classifier. The substitute classifier is then used to craft adversarial examples. While these techniques have been applied for image classification models, such methods have not been explored extensively for text.",
"We implement both the substitute network training and adversarial example generation using an encoder-decoder architecture called Adversarial Examples Generator (AEG). The encoder extracts the character and word information from the input text and produces hidden representations of words considering its sequence context information. A substitute network is not implemented separately but applied using an attention mechanism to weigh the encoded hidden states based on their relevance to making predictions closer to target model outputs. The attention scores provide certain level of interpretability to the model as the regions of text that need to perturbed can be identified and visualized. The decoder uses the attention scores obtained from the substitute network, combines it with decoder state information to decide if perturbation is required at this state or not and finally emits the text unit (a text unit may refer to a word or character). Inspired by a work by Luong et al. BIBREF25, the decoder is a word and character-level recurrent network employed to generate adversarial examples. Before the substitute network is trained, we pretrain our encoder-decoder model on common misspellings and paraphrase datasets to empower the model to produce character and word perturbations in the form of misspellings or paraphrases. For training substitute network and generation of adversarial examples, we randomly draw data that is disjoint from the training data of the black-box model since we assume the adversaries have no prior knowledge about the training data or the model. Specifically, we consider attacking a target classifier by generating adversarial examples based on unseen input examples. This is done by dividing the dataset into training, validation and test using 60-30-10 ratio. The training data is used by the target model, while the unseen validation samples are used with necessary data augmentation for our AEG model. We further improve our model by using a self-critical approach to finally generate better adversarial examples. The rewards are formulated based on the following goals: (a) fool the target classifier, (b) minimize the number of perturbations and (c) preserve the semantics of the text. In the following sections, we explain the encoder-decoder model and then describe the reinforcement learning framing towards generation of adversarial examples.",
"The primary purpose of pretraining AEG is to enable our hybrid encoder-decoder to encode both character and word information from the input example and produce both word and character-level transformations in the form of paraphrases or misspellings. Though the pretraining helps us mitigate the cold-start issue, it does not guarantee that these perturbed texts will fool the target model. There are large number of valid perturbations that can be applied due to multiple ways of arranging text units to produce paraphrases or different misspellings. Thus, minimizing $J_{mle}$ is not sufficient to generate adversarial examples.",
"The reward $r(\\hat{y})$ for the sequence generated is a weighted sum of different constraints required for generating adversarial examples. Since our model operates at word and character levels, we therefore compute three rewards: adversarial reward, semantic similarity and lexical similarity reward. The reward should be high when: (a) the generated sequence causes the target model to produce a low classification prediction probability for its ground truth category, (b) semantic similarity is preserved and (c) the changes made to the original text are minimal.",
"Inspired by the work of Li et al. BIBREF37, we train a deep matching model that can represent the degree of match between two texts. We use character based biLSTM models with attention BIBREF38 to handle word and character level perturbations. The matching model will help us compute the the semantic similarity $R_S$ between the text generated and the original input text.",
"Since our model functions at both character and word level, we compute the lexical similarity. The purpose of this reward is to keep the changes as minimal as possible to just fool the target classifier. Motivated by the recent work of Moon et al. BIBREF39, we pretrain a deep neural network to compute approximate Levenshtein distance $R_{L}$ composed of character based bi-LSTM model. We replicate that model by generating a large number of text with perturbations in the form of insertions, deletions or replacements. We also include words which are prominent nicknames, abbreviations or inconsistent notations to have more lexical similarity. This is generally not possible using direct Levenshtein distance computation. Once trained, it can produce a purely lexical embedding of the text without semantic allusion. This can be used to compute the lexical similarity between the generated text $y$ and the original input text $x$ for our purpose.",
"Table TABREF29 summarizes the data and models used in our experiments. We compare our proposed model with the following black-box non-targeted attacks:",
"Random: We randomly select a word in the text and introduce some perturbation to that word in the form of a character replacement or synonymous word replacement. No specific strategy to identify importance of words.",
"NMT-BT: We generate paraphrases of the sentences of the text using a back-translation approach BIBREF23. We used pretrained English$\\leftrightarrow $German translation models to obtain back-translations of input examples.",
"DeepWordBug BIBREF24: A scoring function is used to determine the important tokens to change. The tokens are then modified to evade a target model.",
"No-RL: We use our pretrained model without the reinforcement learning objective.",
"Given different settings of the adversary, there are other works that have designed attacks in “gray-box” settings BIBREF8, BIBREF9, BIBREF10. However, the definitions of “gray-box” attacks are quite different in each of these approaches. In this paper, we focus on “black-box” setting where we assume that the adversary possesses a limited set of labeled data, which is different from the target's training data, and also has an oracle access to the system, i.e., one can query the target classifier with any input and get its corresponding predictions. We propose an effective technique to generate adversarial examples in a black-box setting. We develop an Adversarial Example Generator (AEG) model that uses a reinforcement learning framing to generate adversarial examples. We evaluate our models using a word-based BIBREF11 and character-based BIBREF12 text classification model on benchmark classification tasks: sentiment classification and news categorization. The adversarial sequences generated are able to effectively fool the classifiers without changing the semantics of the text. Our contributions are as follows:"
],
"extractive_spans": [],
"free_form_answer": "While the models aim to generate examples which preserve the semantics of the text with minimal perturbations, the Random model randomly replaces a character, which may not preserve the semantics. ",
"highlighted_evidence": [
"While SCPNs and DeepWordBug primary rely only on paraphrases and character transformations respectively to fool the classifier, our model uses a hybrid word-character encoder-decoder approach to introduce both paraphrases and character-level perturbations as a part of our attack strategy. ",
"Given an instance $x$, the goal of the adversary is to generate adversarial examples $x^{\\prime }$ such that $T(x^{\\prime }) \\ne l$, where $l$ denotes the true label i.e take one of the $K$ classes of the target classification model. The changes made to $x$ to get $x^{\\prime }$ are called perturbations. We would like to have $x^{\\prime }$ close to the original instance $x$.",
"We further improve our model by using a self-critical approach to finally generate better adversarial examples. The rewards are formulated based on the following goals: (a) fool the target classifier, (b) minimize the number of perturbations and (c) preserve the semantics of the text.",
"The primary purpose of pretraining AEG is to enable our hybrid encoder-decoder to encode both character and word information from the input example and produce both word and character-level transformations in the form of paraphrases or misspellings.",
"The reward $r(\\hat{y})$ for the sequence generated is a weighted sum of different constraints required for generating adversarial examples. Since our model operates at word and character levels, we therefore compute three rewards: adversarial reward, semantic similarity and lexical similarity reward. The reward should be high when: (a) the generated sequence causes the target model to produce a low classification prediction probability for its ground truth category, (b) semantic similarity is preserved and (c) the changes made to the original text are minimal.",
"Inspired by the work of Li et al. BIBREF37, we train a deep matching model that can represent the degree of match between two texts. We use character based biLSTM models with attention BIBREF38 to handle word and character level perturbations. The matching model will help us compute the the semantic similarity $R_S$ between the text generated and the original input text.",
"The purpose of this reward is to keep the changes as minimal as possible to just fool the target classifier. Motivated by the recent work of Moon et al. BIBREF39, we pretrain a deep neural network to compute approximate Levenshtein distance $R_{L}$ composed of character based bi-LSTM model. We replicate that model by generating a large number of text with perturbations in the form of insertions, deletions or replacements. We also include words which are prominent nicknames, abbreviations or inconsistent notations to have more lexical similarity.",
"Table TABREF29 summarizes the data and models used in our experiments. We compare our proposed model with the following black-box non-targeted attacks:\n\nRandom: We randomly select a word in the text and introduce some perturbation to that word in the form of a character replacement or synonymous word replacement. No specific strategy to identify importance of words.\n\nNMT-BT: We generate paraphrases of the sentences of the text using a back-translation approach BIBREF23. We used pretrained English$\\leftrightarrow $German translation models to obtain back-translations of input examples.\n\nDeepWordBug BIBREF24: A scoring function is used to determine the important tokens to change. The tokens are then modified to evade a target model.\n\nNo-RL: We use our pretrained model without the reinforcement learning objective.",
" We develop an Adversarial Example Generator (AEG) model that uses a reinforcement learning framing to generate adversarial examples. We evaluate our models using a word-based BIBREF11 and character-based BIBREF12 text classification model on benchmark classification tasks: sentiment classification and news categorization. The adversarial sequences generated are able to effectively fool the classifiers without changing the semantics of the text."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this work, we have introduced a $AEG$, a model capable of generating adversarial text examples to fool the black-box text classification models. Since we do not have access to gradients or parameters of the target model, we modelled our problem using a reinforcement learning based approach. In order to effectively baseline the REINFORCE algorithm for policy-gradients, we implemented a self-critical approach that normalizes the rewards obtained by sampled sentences with the rewards obtained by the model under test-time inference algorithm. By generating adversarial examples for target word and character-based models trained on IMDB reviews and AG's news dataset, we find that our model is capable of generating semantics-preserving perturbations that leads to steep decrease in accuracy of those target models. We conducted ablation studies to find the importance of individual components of our system. Extremely low values of the certain reward coefficient constricts the quantitative performance of the model can also lead to semantic divergence. Therefore, the choice of a particular value for this model should be motivated by the demands of the context in which it is applied. One of the main challenges of such approaches lies in the ability to produce more synthetic data to train the generator model in the distribution of the target model's training data. This can significantly improve the performance of our model. We hope that our method motivates a more nuanced exploration into generating adversarial examples and adversarial training for building robust classification models."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Extremely low values of the certain reward coefficient constricts the quantitative performance of the model can also lead to semantic divergence."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1bbefe218d9e505c1b7d369ad4e275d307eb6cd3",
"7308aa86291a15318fa99a5d9d86362b4f391713"
],
"answer": [
{
"evidence": [
"We analyze the effectiveness of our approach by comparing the results from using two different baselines against character and word-based models trained on different datasets. Table TABREF40 demonstrates the capability of our model. Without the reinforcement learning objective, the No-RL model performs better than the back-translation approach(NMT-BT). The improvement can be attributed to the word and character perturbations introduced by our hybrid encoder-decoder model as opposed to only paraphrases in the former model. Our complete AEG model outperforms all the other models with significant drop in accuracy. For the CNN-Word, DeepWordBug decreases the accuracy from 89.95% to 28.13% while AEG model further reduces it to 18.5%.",
"FLOAT SELECTED: Table 2. Left: Performance of our AEG model on IMDB and AG’s News dataset using word and character based CNN models respectively. Results indicate the percentage dip in the accuracy by using the corresponding attacking model over the original accuracy. Right: Performance of different variants of our model.",
"FLOAT SELECTED: Table 2. Left: Performance of our AEG model on IMDB and AG’s News dataset using word and character based CNN models respectively. Results indicate the percentage dip in the accuracy by using the corresponding attacking model over the original accuracy. Right: Performance of different variants of our model."
],
"extractive_spans": [],
"free_form_answer": "Authors best attacking model resulted in dip in the accuracy of CNN-Word (IMDB) by 79.43% and CNN-Char (AG's News) model by 72.16%",
"highlighted_evidence": [
"Table TABREF40 demonstrates the capability of our model.",
"FLOAT SELECTED: Table 2. Left: Performance of our AEG model on IMDB and AG’s News dataset using word and character based CNN models respectively. Results indicate the percentage dip in the accuracy by using the corresponding attacking model over the original accuracy. Right: Performance of different variants of our model.",
"FLOAT SELECTED: Table 2. Left: Performance of our AEG model on IMDB and AG’s News dataset using word and character based CNN models respectively. Results indicate the percentage dip in the accuracy by using the corresponding attacking model over the original accuracy. Right: Performance of different variants of our model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The performance of these methods are measured by the percentage fall in accuracy of these models on the generated adversarial texts. Higher the percentage dip in the accuracy of the target classifier, more effective is our model.",
"We analyze the effectiveness of our approach by comparing the results from using two different baselines against character and word-based models trained on different datasets. Table TABREF40 demonstrates the capability of our model. Without the reinforcement learning objective, the No-RL model performs better than the back-translation approach(NMT-BT). The improvement can be attributed to the word and character perturbations introduced by our hybrid encoder-decoder model as opposed to only paraphrases in the former model. Our complete AEG model outperforms all the other models with significant drop in accuracy. For the CNN-Word, DeepWordBug decreases the accuracy from 89.95% to 28.13% while AEG model further reduces it to 18.5%."
],
"extractive_spans": [
"Our complete AEG model outperforms all the other models with significant drop in accuracy. For the CNN-Word, DeepWordBug decreases the accuracy from 89.95% to 28.13% while AEG model further reduces it to 18.5%."
],
"free_form_answer": "",
"highlighted_evidence": [
"The performance of these methods are measured by the percentage fall in accuracy of these models on the generated adversarial texts. Higher the percentage dip in the accuracy of the target classifier, more effective is our model.",
"Table TABREF40 demonstrates the capability of our model. Without the reinforcement learning objective, the No-RL model performs better than the back-translation approach(NMT-BT). The improvement can be attributed to the word and character perturbations introduced by our hybrid encoder-decoder model as opposed to only paraphrases in the former model. Our complete AEG model outperforms all the other models with significant drop in accuracy. For the CNN-Word, DeepWordBug decreases the accuracy from 89.95% to 28.13% while AEG model further reduces it to 18.5%.",
"Table TABREF40 demonstrates the capability of our model. Without the reinforcement learning objective, the No-RL model performs better than the back-translation approach(NMT-BT). The improvement can be attributed to the word and character perturbations introduced by our hybrid encoder-decoder model as opposed to only paraphrases in the former model. Our complete AEG model outperforms all the other models with significant drop in accuracy. For the CNN-Word, DeepWordBug decreases the accuracy from 89.95% to 28.13% while AEG model further reduces it to 18.5%."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"annotation_id": [
"85debf590311306d2ade18b271f4d23b29b7e151",
"fd2f80879c4035d3155c40c51e0e771486a1ff30"
],
"answer": [
{
"evidence": [
"News categorization: We perform our experiments on AG's news corpus with a character-based convolutional model (CNN-Char) BIBREF12. The news corpus contains titles and descriptions of various news articles along with their respective categories. There are four categories: World, Sports, Business and Sci/Tech. The trained CNN-Char model achieves a test accuracy of 89.11%."
],
"extractive_spans": [
" character-based convolutional model (CNN-Char)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We perform our experiments on AG's news corpus with a character-based convolutional model (CNN-Char) BIBREF12."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct experiments on different datasets to verify if the accuracy of the deep learning models decrease when fed with the adversarial examples generated by our model. We use benchmark sentiment classification and news categorization datasets and the details are as follows:",
"Sentiment classification: We trained a word-based convolutional model (CNN-Word) BIBREF11 on IMDB sentiment dataset . The dataset contains 50k movie reviews in total which are labeled as positive or negative. The trained model achieves a test accuracy of 89.95% which is relatively close to the state-of-the-art results on this dataset.",
"News categorization: We perform our experiments on AG's news corpus with a character-based convolutional model (CNN-Char) BIBREF12. The news corpus contains titles and descriptions of various news articles along with their respective categories. There are four categories: World, Sports, Business and Sci/Tech. The trained CNN-Char model achieves a test accuracy of 89.11%.",
"We analyze the effectiveness of our approach by comparing the results from using two different baselines against character and word-based models trained on different datasets. Table TABREF40 demonstrates the capability of our model. Without the reinforcement learning objective, the No-RL model performs better than the back-translation approach(NMT-BT). The improvement can be attributed to the word and character perturbations introduced by our hybrid encoder-decoder model as opposed to only paraphrases in the former model. Our complete AEG model outperforms all the other models with significant drop in accuracy. For the CNN-Word, DeepWordBug decreases the accuracy from 89.95% to 28.13% while AEG model further reduces it to 18.5%."
],
"extractive_spans": [],
"free_form_answer": "A word-based convolutional model (CNN-Word) and a character-based convolutional model (CNN-Char)",
"highlighted_evidence": [
"We conduct experiments on different datasets to verify if the accuracy of the deep learning models decrease when fed with the adversarial examples generated by our model. We use benchmark sentiment classification and news categorization datasets and the details are as follows:\n\nSentiment classification: We trained a word-based convolutional model (CNN-Word) BIBREF11 on IMDB sentiment dataset . The dataset contains 50k movie reviews in total which are labeled as positive or negative. The trained model achieves a test accuracy of 89.95% which is relatively close to the state-of-the-art results on this dataset.\n\nNews categorization: We perform our experiments on AG's news corpus with a character-based convolutional model (CNN-Char) BIBREF12. The news corpus contains titles and descriptions of various news articles along with their respective categories. There are four categories: World, Sports, Business and Sci/Tech. The trained CNN-Char model achieves a test accuracy of 89.11%.",
"For the CNN-Word, DeepWordBug decreases the accuracy from 89.95% to 28.13% while AEG model further reduces it to 18.5%."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"annotation_id": [
"00878ca9436142500a44d543ca985239dc306461",
"bd0a9feea80c9e6b572bfacf8d7e2699dc112270"
],
"answer": [
{
"evidence": [
"Sentiment classification: We trained a word-based convolutional model (CNN-Word) BIBREF11 on IMDB sentiment dataset . The dataset contains 50k movie reviews in total which are labeled as positive or negative. The trained model achieves a test accuracy of 89.95% which is relatively close to the state-of-the-art results on this dataset.",
"We analyze the effectiveness of our approach by comparing the results from using two different baselines against character and word-based models trained on different datasets. Table TABREF40 demonstrates the capability of our model. Without the reinforcement learning objective, the No-RL model performs better than the back-translation approach(NMT-BT). The improvement can be attributed to the word and character perturbations introduced by our hybrid encoder-decoder model as opposed to only paraphrases in the former model. Our complete AEG model outperforms all the other models with significant drop in accuracy. For the CNN-Word, DeepWordBug decreases the accuracy from 89.95% to 28.13% while AEG model further reduces it to 18.5%."
],
"extractive_spans": [],
"free_form_answer": "A word-based convolutional neural network (CNN-Word)",
"highlighted_evidence": [
"Sentiment classification: We trained a word-based convolutional model (CNN-Word) BIBREF11 on IMDB sentiment dataset . The dataset contains 50k movie reviews in total which are labeled as positive or negative. The trained model achieves a test accuracy of 89.95% which is relatively close to the state-of-the-art results on this dataset.",
"For the CNN-Word, DeepWordBug decreases the accuracy from 89.95% to 28.13% while AEG model further reduces it to 18.5%."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Sentiment classification: We trained a word-based convolutional model (CNN-Word) BIBREF11 on IMDB sentiment dataset . The dataset contains 50k movie reviews in total which are labeled as positive or negative. The trained model achieves a test accuracy of 89.95% which is relatively close to the state-of-the-art results on this dataset."
],
"extractive_spans": [
"word-based convolutional model (CNN-Word)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We trained a word-based convolutional model (CNN-Word) BIBREF11 on IMDB sentiment dataset ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"9ff479e616c70f79cad4df9e07615e66d3363055",
"f5fa5784a455d6a6618f2d07c313846c0f9c00f7"
],
"answer": [
{
"evidence": [
"Training ::: Supervised Pretraining with Teacher Forcing",
"The primary purpose of pretraining AEG is to enable our hybrid encoder-decoder to encode both character and word information from the input example and produce both word and character-level transformations in the form of paraphrases or misspellings. Though the pretraining helps us mitigate the cold-start issue, it does not guarantee that these perturbed texts will fool the target model. There are large number of valid perturbations that can be applied due to multiple ways of arranging text units to produce paraphrases or different misspellings. Thus, minimizing $J_{mle}$ is not sufficient to generate adversarial examples.",
"Training ::: Training with Reinforcement learning",
"We fine-tune our model to fool a target classifier by learning a policy that maximizes a specific discrete metric formulated based on the constraints required to generate adversarial examples. In our work, we use the self-critical approach of Rennie et al. BIBREF36 as our policy gradient training algorithm."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Training ::: Supervised Pretraining with Teacher Forcing\nThe primary purpose of pretraining AEG is to enable our hybrid encoder-decoder to encode both character and word information from the input example and produce both word and character-level transformations in the form of paraphrases or misspellings. ",
"Training ::: Training with Reinforcement learning\nWe fine-tune our model to fool a target classifier by learning a policy that maximizes a specific discrete metric formulated based on the constraints required to generate adversarial examples"
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Since our model functions at both character and word level, we compute the lexical similarity. The purpose of this reward is to keep the changes as minimal as possible to just fool the target classifier. Motivated by the recent work of Moon et al. BIBREF39, we pretrain a deep neural network to compute approximate Levenshtein distance $R_{L}$ composed of character based bi-LSTM model. We replicate that model by generating a large number of text with perturbations in the form of insertions, deletions or replacements. We also include words which are prominent nicknames, abbreviations or inconsistent notations to have more lexical similarity. This is generally not possible using direct Levenshtein distance computation. Once trained, it can produce a purely lexical embedding of the text without semantic allusion. This can be used to compute the lexical similarity between the generated text $y$ and the original input text $x$ for our purpose."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Motivated by the recent work of Moon et al. BIBREF39, we pretrain a deep neural network to compute approximate Levenshtein distance $R_{L}$ composed of character based bi-LSTM model."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7a17e1eed1c90f3f3d19e887da9db0bceb6aa9a4",
"b6759d3281fa2613befa19cd157740a276be5dac"
],
"answer": [
{
"evidence": [
"Proposed Attack Strategy",
"Let us consider a target model $T$ and $(x,l)$ refers to the samples from the dataset. Given an instance $x$, the goal of the adversary is to generate adversarial examples $x^{\\prime }$ such that $T(x^{\\prime }) \\ne l$, where $l$ denotes the true label i.e take one of the $K$ classes of the target classification model. The changes made to $x$ to get $x^{\\prime }$ are called perturbations. We would like to have $x^{\\prime }$ close to the original instance $x$. In a black box setting, we do not have knowledge about the internals of the target model or its training data. Previous work by Papernot et al. BIBREF14 train a separate substitute classifier such that it can mimic the decision boundaries of the target classifier. The substitute classifier is then used to craft adversarial examples. While these techniques have been applied for image classification models, such methods have not been explored extensively for text.",
"We implement both the substitute network training and adversarial example generation using an encoder-decoder architecture called Adversarial Examples Generator (AEG). The encoder extracts the character and word information from the input text and produces hidden representations of words considering its sequence context information. A substitute network is not implemented separately but applied using an attention mechanism to weigh the encoded hidden states based on their relevance to making predictions closer to target model outputs. The attention scores provide certain level of interpretability to the model as the regions of text that need to perturbed can be identified and visualized. The decoder uses the attention scores obtained from the substitute network, combines it with decoder state information to decide if perturbation is required at this state or not and finally emits the text unit (a text unit may refer to a word or character). Inspired by a work by Luong et al. BIBREF25, the decoder is a word and character-level recurrent network employed to generate adversarial examples. Before the substitute network is trained, we pretrain our encoder-decoder model on common misspellings and paraphrase datasets to empower the model to produce character and word perturbations in the form of misspellings or paraphrases. For training substitute network and generation of adversarial examples, we randomly draw data that is disjoint from the training data of the black-box model since we assume the adversaries have no prior knowledge about the training data or the model. Specifically, we consider attacking a target classifier by generating adversarial examples based on unseen input examples. This is done by dividing the dataset into training, validation and test using 60-30-10 ratio. The training data is used by the target model, while the unseen validation samples are used with necessary data augmentation for our AEG model. We further improve our model by using a self-critical approach to finally generate better adversarial examples. The rewards are formulated based on the following goals: (a) fool the target classifier, (b) minimize the number of perturbations and (c) preserve the semantics of the text. In the following sections, we explain the encoder-decoder model and then describe the reinforcement learning framing towards generation of adversarial examples.",
"Training ::: Training with Reinforcement learning",
"We fine-tune our model to fool a target classifier by learning a policy that maximizes a specific discrete metric formulated based on the constraints required to generate adversarial examples. In our work, we use the self-critical approach of Rennie et al. BIBREF36 as our policy gradient training algorithm.",
"Training ::: Training with Reinforcement learning ::: Self-critical sequence training (SCST)",
"In SCST approach, the model learns to gather more rewards from its sampled sequences that bring higher rewards than its best greedy counterparts. First, we compute two sequences: (a) $y^{\\prime }$ sampled from the model's distribution $p(y^{\\prime }_j|y^{\\prime }_{<j},h)$ and (b) $\\hat{y}$ obtained by greedily decoding ($argmax$ predictions) from the distribution $p(\\hat{y}_j|\\hat{y}_{<j},h)$ Next, rewards $r(y^{\\prime }_j),r(\\hat{y}_j)$ are computed for both the sequences using a reward function $r(\\cdot )$, explained in Section SECREF26. We train the model by minimizing:",
"Here $r(\\hat{y})$ can be viewed as the baseline reward. This approach, therefore, explores different sequences that produce higher reward compared to the current best policy."
],
"extractive_spans": [
"Training ::: Training with Reinforcement learning\nWe fine-tune our model to fool a target classifier by learning a policy that maximizes a specific discrete metric formulated based on the constraints required to generate adversarial examples. In our work, we use the self-critical approach of Rennie et al. BIBREF36 as our policy gradient training algorithm.\n\nTraining ::: Training with Reinforcement learning ::: Self-critical sequence training (SCST)\nIn SCST approach, the model learns to gather more rewards from its sampled sequences that bring higher rewards than its best greedy counterparts. First, we compute two sequences: (a) $y^{\\prime }$ sampled from the model's distribution $p(y^{\\prime }_j|y^{\\prime }_{"
],
"free_form_answer": "",
"highlighted_evidence": [
"Proposed Attack Strategy\nLet us consider a target model $T$ and $(x,l)$ refers to the samples from the dataset. Given an instance $x$, the goal of the adversary is to generate adversarial examples $x^{\\prime }$ such that $T(x^{\\prime }) \\ne l$, where $l$ denotes the true label i.e take one of the $K$ classes of the target classification model. The changes made to $x$ to get $x^{\\prime }$ are called perturbations. We would like to have $x^{\\prime }$ close to the original instance $x$. In a black box setting, we do not have knowledge about the internals of the target model or its training data. Previous work by Papernot et al. BIBREF14 train a separate substitute classifier such that it can mimic the decision boundaries of the target classifier. The substitute classifier is then used to craft adversarial examples. While these techniques have been applied for image classification models, such methods have not been explored extensively for text.\n\nWe implement both the substitute network training and adversarial example generation using an encoder-decoder architecture called Adversarial Examples Generator (AEG). The encoder extracts the character and word information from the input text and produces hidden representations of words considering its sequence context information. A substitute network is not implemented separately but applied using an attention mechanism to weigh the encoded hidden states based on their relevance to making predictions closer to target model outputs. The attention scores provide certain level of interpretability to the model as the regions of text that need to perturbed can be identified and visualized. The decoder uses the attention scores obtained from the substitute network, combines it with decoder state information to decide if perturbation is required at this state or not and finally emits the text unit (a text unit may refer to a word or character). Inspired by a work by Luong et al. BIBREF25, the decoder is a word and character-level recurrent network employed to generate adversarial examples. Before the substitute network is trained, we pretrain our encoder-decoder model on common misspellings and paraphrase datasets to empower the model to produce character and word perturbations in the form of misspellings or paraphrases. For training substitute network and generation of adversarial examples, we randomly draw data that is disjoint from the training data of the black-box model since we assume the adversaries have no prior knowledge about the training data or the model. Specifically, we consider attacking a target classifier by generating adversarial examples based on unseen input examples. This is done by dividing the dataset into training, validation and test using 60-30-10 ratio. The training data is used by the target model, while the unseen validation samples are used with necessary data augmentation for our AEG model. We further improve our model by using a self-critical approach to finally generate better adversarial examples. The rewards are formulated based on the following goals: (a) fool the target classifier, (b) minimize the number of perturbations and (c) preserve the semantics of the text. ",
"Training ::: Training with Reinforcement learning\nWe fine-tune our model to fool a target classifier by learning a policy that maximizes a specific discrete metric formulated based on the constraints required to generate adversarial examples. In our work, we use the self-critical approach of Rennie et al. BIBREF36 as our policy gradient training algorithm.\n\nTraining ::: Training with Reinforcement learning ::: Self-critical sequence training (SCST)\nIn SCST approach, the model learns to gather more rewards from its sampled sequences that bring higher rewards than its best greedy counterparts. First, we compute two sequences: (a) $y^{\\prime }$ sampled from the model's distribution $p(y^{\\prime }_j|y^{\\prime }_{\n\nHere $r(\\hat{y})$ can be viewed as the baseline reward. This approach, therefore, explores different sequences that produce higher reward compared to the current best policy.",
"Training ::: Training with Reinforcement learning\nWe fine-tune our model to fool a target classifier by learning a policy that maximizes a specific discrete metric formulated based on the constraints required to generate adversarial examples. In our work, we use the self-critical approach of Rennie et al. BIBREF36 as our policy gradient training algorithm.\n\nTraining ::: Training with Reinforcement learning ::: Self-critical sequence training (SCST)\nIn SCST approach, the model learns to gather more rewards from its sampled sequences that bring higher rewards than its best greedy counterparts. First, we compute two sequences: (a) $y^{\\prime }$ sampled from the model's distribution $p(y^{\\prime }_j|y^{\\prime }_{\n\nHere $r(\\hat{y})$ can be viewed as the baseline reward. This approach, therefore, explores different sequences that produce higher reward compared to the current best policy."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this task of adversarial example generation, we have black-box access to the target model; the generator is not aware of the target model architecture or parameters and is only capable of querying the target model with supplied inputs and obtaining the output predictions. To enable the model to have capabilities to generate word and character perturbations, we develop a hybrid encoder-decoder model, Adversarial Examples Generator (AEG), that operates at both word and character level to generate adversarial examples. Below, we explain the components of this model which have been improved to handle both word and character information from the text sequence.",
"The encoder maps the input text sequence into a sequence of representations using word and character-level information. Our encoder (Figure FIGREF10) is a slight variant of Chen et al.BIBREF31. This approach providing multiple levels of granularity can be useful in order to handle rare or noisy words in the text. Given character embeddings $E^{(c)}=[e_1^{(c)}, e_2^{(c)},...e_{n^{\\prime }}^{(c)}]$ and word embeddings $E^{(w)}=[e_1^{(w)}, e_2^{(w)},...e_{n}^{(w)}]$ of the input, starting ($p_t$) and ending ($q_t$) character positions at time step $t$, we define inside character embeddings as: $E_I^{(c)}=[e_{p_t}^{(c)},...., e_{q_t}^{(c)}]$ and outside embeddings as: $E_O^{(c)}=[e_{1}^{(c)},....,e_{p_t-1}^{(c)}; e_{q_t+1}^{(c)},...,e_{n^{\\prime }}^{(c)}]$. First, we obtain the character-enhanced word representation $\\overleftrightarrow{h_t}$ by combining the word information from $E^{(w)}$ with the character context vectors. Character context vectors are obtained by attending over inside and outside character embeddings. Next, we compute a summary vector $S$ over the hidden states $\\overleftrightarrow{h_t}$ using an attention layer expressed as $Attn(\\overleftrightarrow{H})$. To generate adversarial examples, it is important to identify the most relevant text units that contribute towards the target model's prediction and then use this information during the decoding step to introduce perturbation on those units. Hence, the summary vector is optimized using target model predictions without back propagating through the entire encoder. This acts as a substitute network that learns to mimic the predictions of the target classifier.",
"Our AEG should be able to generate both character and word level perturbations as necessary. We achieve this by modifying the standard decoder BIBREF29, BIBREF30 to have two-level decoder GRUs: word-GRU and character-GRU (see Figure FIGREF14). Such hybrid approaches have been studied to achieve open vocabulary NMT in some of the previous work like Wu et al. BIBREF32 and Luong et al. BIBREF25. Given the challenge that all different word misspellings cannot fit in a fixed vocabulary, we leverage the power of both words and characters in our generation procedure. The word-GRU uses word context vector $c_j^{(w)}$ by attending over the encoder hidden states $\\overleftrightarrow{h_t}$. Once the word context vector $c_j^{(w)}$ is computed, we introduce a perturbation vector $v_{p}$ to impart information about the need for any word or character perturbations at this decoding step. We construct this vector using the word-GRU decoder state $s_j^{(w)}$, context vector $c_j^{(w)}$ and summary vector $S$ from the encoder as:"
],
"extractive_spans": [
"able to generate both character and word level perturbations as necessary",
"modifying the standard decoder BIBREF29, BIBREF30 to have two-level decoder GRUs: word-GRU and character-GRU"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this task of adversarial example generation, we have black-box access to the target model; the generator is not aware of the target model architecture or parameters and is only capable of querying the target model with supplied inputs and obtaining the output predictions. To enable the model to have capabilities to generate word and character perturbations, we develop a hybrid encoder-decoder model, Adversarial Examples Generator (AEG), that operates at both word and character level to generate adversarial examples.",
"The encoder maps the input text sequence into a sequence of representations using word and character-level information.",
"Our AEG should be able to generate both character and word level perturbations as necessary. We achieve this by modifying the standard decoder BIBREF29, BIBREF30 to have two-level decoder GRUs: word-GRU and character-GRU (see Figure FIGREF14)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do they manually check all adversarial examples that fooled some model for potential valid examples?",
"Are all generated examples semantics-preserving perturbations to the original text?",
"What is success rate of fooling tested models in experiments?",
"What models are able to be fooled for AG's news corpus news categorization task by this approach?",
"What models are able to be fooled for IMDB sentiment classification task by this approach?",
"Do they use already trained model on some task in their reinforcement learning approach?",
"How does proposed reinforcement learning based approach generate adversarial examples in black-box settings?"
],
"question_id": [
"573b8b1ad919d3fd0ef7df84e55e5bfd165b3e84",
"07d98dfa88944abd12acd45e98fb7d3719986aeb",
"3a40559e5a3c2a87c7b9031c89e762b828249c05",
"5db47bbb97282983e10414240db78154ea7ac75f",
"c589d83565f528b87e355b9280c1e7143a42401d",
"7f90e9390ad58b22b362a57330fff1c7c2da7985",
"3e3e45094f952704f1f679701470c3dbd845999e"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Illustration of Encoder.",
"Fig. 2. Illustration of the word and character decoder.",
"Table 1. Summary of data and models used in our experiments.",
"Table 2. Left: Performance of our AEG model on IMDB and AG’s News dataset using word and character based CNN models respectively. Results indicate the percentage dip in the accuracy by using the corresponding attacking model over the original accuracy. Right: Performance of different variants of our model.",
"Fig. 3. Left: Examples from IMDB reviews dataset, where the model introduces misspellings or paraphrases that are sufficient to fool the target classifier. Right: Effect of coefficients of the reward function. The first line is the text from the AG’s news corpus. The second line is the generated by the model given specific constraints on the reward coefficients. The examples do not necessarily lead to misclassification. The text in green are attention scores indicating relevance of classification. The text in red are the perturbations introduced by our model."
],
"file": [
"7-Figure1-1.png",
"8-Figure2-1.png",
"11-Table1-1.png",
"12-Table2-1.png",
"13-Figure3-1.png"
]
} | [
"Do they manually check all adversarial examples that fooled some model for potential valid examples?",
"Are all generated examples semantics-preserving perturbations to the original text?",
"What is success rate of fooling tested models in experiments?",
"What models are able to be fooled for AG's news corpus news categorization task by this approach?",
"What models are able to be fooled for IMDB sentiment classification task by this approach?"
] | [
[
"1909.07873-Experiments ::: Human Evaluation-0"
],
[
"1909.07873-Experiments ::: Setup-4",
"1909.07873-Training ::: Training with Reinforcement learning ::: Rewards ::: Semantic Similarity-0",
"1909.07873-Training ::: Training with Reinforcement learning ::: Rewards-0",
"1909.07873-Experiments ::: Setup-5",
"1909.07873-Proposed Attack Strategy-0",
"1909.07873-Introduction-2",
"1909.07873-Training ::: Training with Reinforcement learning ::: Rewards ::: Lexical Similarity-0",
"1909.07873-Experiments ::: Setup-3",
"1909.07873-Related Work-2",
"1909.07873-Experiments ::: Setup-6",
"1909.07873-Conclusion-0",
"1909.07873-Training ::: Supervised Pretraining with Teacher Forcing-0",
"1909.07873-Experiments ::: Setup-7",
"1909.07873-Proposed Attack Strategy-1"
],
[
"1909.07873-Experiments ::: Quantitative Analysis-0",
"1909.07873-Experiments ::: Setup-8",
"1909.07873-12-Table2-1.png"
],
[
"1909.07873-Experiments ::: Setup-0",
"1909.07873-Experiments ::: Setup-1",
"1909.07873-Experiments ::: Setup-2",
"1909.07873-Experiments ::: Quantitative Analysis-0"
],
[
"1909.07873-Experiments ::: Setup-1",
"1909.07873-Experiments ::: Quantitative Analysis-0"
]
] | [
"Only 100 successfully adversarial examples were manually checked, not all of them.",
"While the models aim to generate examples which preserve the semantics of the text with minimal perturbations, the Random model randomly replaces a character, which may not preserve the semantics. ",
"Authors best attacking model resulted in dip in the accuracy of CNN-Word (IMDB) by 79.43% and CNN-Char (AG's News) model by 72.16%",
"A word-based convolutional model (CNN-Word) and a character-based convolutional model (CNN-Char)",
"A word-based convolutional neural network (CNN-Word)"
] | 207 |
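The self-critical sequence training (SCST) objective referenced in the evidence above uses the reward of a greedy decode as a baseline for the reward of a sampled sequence, and weights the sample's log-likelihood by that advantage. Below is a minimal PyTorch sketch of such a loss; the function name, tensor shapes and toy values are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's code) of a self-critical sequence training (SCST) loss:
# reward of the sampled sequence minus the greedy baseline reward, scaling the sample's
# log-likelihood. Tensor names and shapes are assumptions for illustration.
import torch


def scst_loss(sampled_log_probs: torch.Tensor,
              sampled_reward: torch.Tensor,
              greedy_reward: torch.Tensor) -> torch.Tensor:
    """sampled_log_probs: (batch, seq_len) log p(y'_j | y'_<j, h) of the sampled tokens.
    sampled_reward / greedy_reward: (batch,) rewards r(y') and r(y_hat)."""
    advantage = (sampled_reward - greedy_reward).detach()  # greedy decode acts as the baseline
    sequence_log_prob = sampled_log_probs.sum(dim=1)       # log-probability of the whole sample
    # Minimizing this raises the likelihood of samples that beat the greedy baseline.
    return -(advantage * sequence_log_prob).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    toy_log_probs = -torch.rand(4, 12)                     # toy per-token log-probabilities
    loss = scst_loss(toy_log_probs,
                     sampled_reward=torch.tensor([0.8, 0.2, 0.5, 0.9]),
                     greedy_reward=torch.tensor([0.6, 0.4, 0.5, 0.7]))
    print(float(loss))
```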
1906.01502 | How multilingual is Multilingual BERT? | In this paper, we show that Multilingual BERT (M-BERT), released by Devlin et al. (2018) as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annotations in one language are used to fine-tune the model for evaluation in another language. To understand why, we present a large number of probing experiments, showing that transfer is possible even to languages in different scripts, that transfer works best between typologically similar languages, that monolingual corpora can train models for code-switching, and that the model can find translation pairs. From these results, we can conclude that M-BERT does create multilingual representations, but that these representations exhibit systematic deficiencies affecting certain language pairs. | {
"paragraphs": [
[
"Deep, contextualized language models provide powerful, general-purpose linguistic representations that have enabled significant advances among a wide range of natural language processing tasks BIBREF1 , BIBREF0 . These models can be pre-trained on large corpora of readily available unannotated text, and then fine-tuned for specific tasks on smaller amounts of supervised data, relying on the induced language model structure to facilitate generalization beyond the annotations. Previous work on model probing has shown that these representations are able to encode, among other things, syntactic and named entity information, but they have heretofore focused on what models trained on English capture about English BIBREF2 , BIBREF3 , BIBREF4 .",
"In this paper, we empirically investigate the degree to which these representations generalize across languages. We explore this question using Multilingual BERT (henceforth, M-Bert), released by BIBREF0 as a single language model pre-trained on the concatenation of monolingual Wikipedia corpora from 104 languages. M-Bert is particularly well suited to this probing study because it enables a very straightforward approach to zero-shot cross-lingual model transfer: we fine-tune the model using task-specific supervised training data from one language, and evaluate that task in a different language, thus allowing us to observe the ways in which the model generalizes information across languages.",
"Our results show that M-Bert is able to perform cross-lingual generalization surprisingly well. More importantly, we present the results of a number of probing experiments designed to test various hypotheses about how the model is able to perform this transfer. Our experiments show that while high lexical overlap between languages improves transfer, M-Bert is also able to transfer between languages written in different scripts—thus having zero lexical overlap—indicating that it captures multilingual representations. We further show that transfer works best for typologically similar languages, suggesting that while M-Bert's multilingual representation is able to map learned structures onto new vocabularies, it does not seem to learn systematic transformations of those structures to accommodate a target language with different word order."
],
[
"Like the original English BERT model (henceforth, En-Bert), M-Bert is a 12 layer transformer BIBREF0 , but instead of being trained only on monolingual English data with an English-derived vocabulary, it is trained on the Wikipedia pages of 104 languages with a shared word piece vocabulary. It does not use any marker denoting the input language, and does not have any explicit mechanism to encourage translation-equivalent pairs to have similar representations.",
"For ner and pos, we use the same sequence tagging architecture as BIBREF0 . We tokenize the input sentence, feed it to Bert, get the last layer's activations, and pass them through a final layer to make the tag predictions. The whole model is then fine-tuned to minimize the cross entropy loss for the task. When tokenization splits words into multiple pieces, we take the prediction for the first piece as the prediction for the word."
],
[
"We perform ner experiments on two datasets: the publicly available CoNLL-2002 and -2003 sets, containing Dutch, Spanish, English, and German BIBREF5 , BIBREF6 ; and an in-house dataset with 16 languages, using the same CoNLL categories. Table TABREF4 shows M-Bert zero-shot performance on all language pairs in the CoNLL data."
],
[
"We perform pos experiments using Universal Dependencies (UD) BIBREF7 data for 41 languages. We use the evaluation sets from BIBREF8 . Table TABREF7 shows M-Bert zero-shot results for four European languages. We see that M-Bert generalizes well across languages, achieving over INLINEFORM0 accuracy for all pairs."
],
[
"Because M-Bert uses a single, multilingual vocabulary, one form of cross-lingual transfer occurs when word pieces present during fine-tuning also appear in the evaluation languages. In this section, we present experiments probing M-Bert's dependence on this superficial form of generalization: How much does transferability depend on lexical overlap? And is transfer possible to languages written in different scripts (no overlap)?"
],
[
"If M-Bert's ability to generalize were mostly due to vocabulary memorization, we would expect zero-shot performance on ner to be highly dependent on word piece overlap, since entities are often similar across languages. To measure this effect, we compute INLINEFORM0 and INLINEFORM1 , the sets of word pieces used in entities in the training and evaluation datasets, respectively, and define overlap as the fraction of common word pieces used in the entities: INLINEFORM2 .",
"Figure FIGREF9 plots ner F1 score versus entity overlap for zero-shot transfer between every language pair in an in-house dataset of 16 languages, for both M-Bert and En-Bert. We can see that performance using En-Bert depends directly on word piece overlap: the ability to transfer deteriorates as word piece overlap diminishes, and F1 scores are near zero for languages written in different scripts. M-Bert's performance, on the other hand, is flat for a wide range of overlaps, and even for language pairs with almost no lexical overlap, scores vary between INLINEFORM0 and INLINEFORM1 , showing that M-Bert's pretraining on multiple languages has enabled a representational capacity deeper than simple vocabulary memorization.",
"To further verify that En-Bert's inability to generalize is due to its lack of a multilingual representation and not an inability of its English-specific word piece vocabulary to represent data in other languages, we evaluate on non-cross-lingual ner and see that it performs comparably to a previous state of the art model (see Table TABREF12 )."
],
[
"M-Bert's ability to transfer between languages that are written in different scripts, and thus have effectively zero lexical overlap, is surprising given that it was trained on separate monolingual corpora and not with a multilingual objective. To probe deeper into how the model is able to perform this generalization, Table TABREF14 shows a sample of pos results for transfer across scripts.",
"Among the most surprising results, an M-Bert model that has been fine-tuned using only pos-labeled Urdu (written in Arabic script), achieves 91% accuracy on Hindi (written in Devanagari script), even though it has never seen a single pos-tagged Devanagari word. This provides clear evidence of M-Bert's multilingual representation ability, mapping structures onto new vocabularies based on a shared representation induced solely from monolingual language model training data.",
"However, cross-script transfer is less accurate for other pairs, such as English and Japanese, indicating that M-Bert's multilingual representation is not able to generalize equally well in all cases. A possible explanation for this, as we will see in section SECREF18 , is typological similarity. English and Japanese have a different order of subject, verb and object, while English and Bulgarian have the same, and M-Bert may be having trouble generalizing across different orderings."
],
[
"In the previous section, we showed that M-Bert's ability to generalize cannot be attributed solely to vocabulary memorization, and that it must be learning a deeper multilingual representation. In this section, we present probing experiments that investigate the nature of that representation: How does typological similarity affect M-Bert's ability to generalize? Can M-Bert generalize from monolingual inputs to code-switching text? Can the model generalize to transliterated text without transliterated language model pretraining?"
],
[
"Following BIBREF10 , we compare languages on a subset of the WALS features BIBREF11 relevant to grammatical ordering. Figure FIGREF17 plots pos zero-shot accuracy against the number of common WALS features. As expected, performance improves with similarity, showing that it is easier for M-Bert to map linguistic structures when they are more similar, although it still does a decent job for low similarity languages when compared to En-Bert."
],
[
"Table TABREF20 shows macro-averaged pos accuracies for transfer between languages grouped according to two typological features: subject/object/verb order, and adjective/noun order BIBREF11 . The results reported include only zero-shot transfer, i.e. they do not include cases training and testing on the same language. We can see that performance is best when transferring between languages that share word order features, suggesting that while M-Bert's multilingual representation is able to map learned structures onto new vocabularies, it does not seem to learn systematic transformations of those structures to accommodate a target language with different word order."
],
[
"Code-switching (CS)—the mixing of multiple languages within a single utterance—and transliteration—writing that is not in the language's standard script—present unique test cases for M-Bert, which is pre-trained on monolingual, standard-script corpora. Generalizing to code-switching is similar to other cross-lingual transfer scenarios, but would benefit to an even larger degree from a shared multilingual representation. Likewise, generalizing to transliterated text is similar to other cross-script transfer experiments, but has the additional caveat that M-Bert was not pre-trained on text that looks like the target.",
"We test M-Bert on the CS Hindi/English UD corpus from BIBREF12 , which provides texts in two formats: transliterated, where Hindi words are written in Latin script, and corrected, where annotators have converted them back to Devanagari script. Table TABREF22 shows the results for models fine-tuned using a combination of monolingual Hindi and English, and using the CS training set (both fine-tuning on the script-corrected version of the corpus as well as the transliterated version).",
"For script-corrected inputs, i.e., when Hindi is written in Devanagari, M-Bert's performance when trained only on monolingual corpora is comparable to performance when training on code-switched data, and it is likely that some of the remaining difference is due to domain mismatch. This provides further evidence that M-Bert uses a representation that is able to incorporate information from multiple languages.",
"However, M-Bert is not able to effectively transfer to a transliterated target, suggesting that it is the language model pre-training on a particular language that allows transfer to that language. M-Bert is outperformed by previous work in both the monolingual-only and code-switched supervision scenarios. Neither BIBREF13 nor BIBREF12 use contextualized word embeddings, but both incorporate explicit transliteration signals into their approaches."
],
[
"In this section, we study the structure of M-Bert's feature space. If it is multilingual, then the transformation mapping between the same sentence in 2 languages should not depend on the sentence itself, just on the language pair."
],
[
"We sample 5000 pairs of sentences from WMT16 BIBREF14 and feed each sentence (separately) to M-Bert with no fine-tuning. We then extract the hidden feature activations at each layer for each of the sentences, and average the representations for the input tokens except [cls] and [sep], to get a vector for each sentence, at each layer INLINEFORM0 , INLINEFORM1 . For each pair of sentences, e.g. INLINEFORM2 , we compute the vector pointing from one to the other and average it over all pairs: INLINEFORM3 , where INLINEFORM4 is the number of pairs. Finally, we translate each sentence, INLINEFORM5 , by INLINEFORM6 , find the closest German sentence vector, and measure the fraction of times the nearest neighbour is the correct pair, which we call the “nearest neighbor accuracy”."
],
[
"In Figure FIGREF27 , we plot the nearest neighbor accuracy for en-de (solid line). It achieves over INLINEFORM0 accuracy for all but the bottom layers, which seems to imply that the hidden representations, although separated in space, share a common subspace that represents useful linguistic information, in a language-agnostic way. Similar curves are obtained for en-ru, and ur-hi (in-house dataset), showing this works for multiple languages.",
"As to the reason why the accuracy goes down in the last few layers, one possible explanation is that since the model was pre-trained for language modeling, it might need more language-specific information to correctly predict the missing word."
],
[
"In this work, we showed that M-Bert's robust, often surprising, ability to generalize cross-lingually is underpinned by a multilingual representation, without being explicitly trained for it. The model handles transfer across scripts and to code-switching fairly well, but effective transfer to typologically divergent and transliterated targets will likely require the model to incorporate an explicit multilingual training objective, such as that used by BIBREF15 or BIBREF16 .",
"As to why M-Bert generalizes across languages, we hypothesize that having word pieces used in all languages (numbers, URLs, etc) which have to be mapped to a shared space forces the co-occurring pieces to also be mapped to a shared space, thus spreading the effect to other word pieces, until different languages are close to a shared space.",
"It is our hope that these kinds of probing experiments will help steer researchers toward the most promising lines of inquiry by encouraging them to focus on the places where current contextualized word representation approaches fall short."
],
[
"We would like to thank Mark Omernick, Livio Baldini Soares, Emily Pitler, Jason Riesa, and Slav Petrov for the valuable discussions and feedback."
],
[
"All models were fine-tuned with a batch size of 32, and a maximum sequence length of 128 for 3 epochs. We used a learning rate of INLINEFORM0 with learning rate warmup during the first INLINEFORM1 of steps, and linear decay afterwards. We also applied INLINEFORM2 dropout on the last layer. No parameter tuning was performed. We used the BERT-Base, Multilingual Cased checkpoint from https://github.com/google-research/bert."
]
],
"section_name": [
"Introduction",
"Models and Data",
"Named entity recognition experiments",
"Part of speech tagging experiments",
"Vocabulary Memorization ",
"Effect of vocabulary overlap",
"Generalization across scripts",
"Encoding Linguistic Structure ",
"Effect of language similarity",
"Generalizing across typological features ",
"Code switching and transliteration",
"Multilingual characterization of the feature space ",
"Experimental Setup",
"Results",
"Conclusion",
"Acknowledgements",
"Model Parameters"
]
} | {
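The nearest-neighbour probe described in the full text above reduces to a few vector operations: average token activations into sentence vectors, shift one language by the mean translation vector, and check how often the closest vector in the other language is the aligned sentence. A minimal NumPy sketch under those assumptions follows; the Euclidean distance criterion and the random stand-in vectors are illustrative, and real use would feed the WMT16 sentence pairs through M-BERT to obtain per-layer sentence vectors.

```python
# Minimal sketch of the "nearest neighbor accuracy" probe from the full text above.
# Assumes sentence vectors were already extracted from M-BERT (average of token
# activations, excluding [CLS]/[SEP]); the random vectors below are only stand-ins.
import numpy as np


def nearest_neighbor_accuracy(v_en: np.ndarray, v_de: np.ndarray) -> float:
    """v_en, v_de: (num_pairs, hidden_size) vectors of aligned EN/DE sentence pairs."""
    v_bar = (v_de - v_en).mean(axis=0)                 # mean translation vector EN -> DE
    translated = v_en + v_bar                          # shift every EN vector toward DE space
    # Distance from each translated EN vector to every DE vector (Euclidean, an assumption).
    dists = np.linalg.norm(translated[:, None, :] - v_de[None, :, :], axis=-1)
    return float((dists.argmin(axis=1) == np.arange(len(v_en))).mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    de = rng.normal(size=(100, 32))
    en = de - 1.5 + 0.05 * rng.normal(size=(100, 32))  # EN differs by a constant offset plus noise
    print(nearest_neighbor_accuracy(en, de))           # close to 1.0 for this toy geometry
```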
"answers": [
{
"annotation_id": [
"43001237c0da4a8020d67900ded9ca8546db9520",
"77495c2445a21bc5feb3a0212ca2fc9944a0fc83"
],
"answer": [
{
"evidence": [
"M-Bert's ability to transfer between languages that are written in different scripts, and thus have effectively zero lexical overlap, is surprising given that it was trained on separate monolingual corpora and not with a multilingual objective. To probe deeper into how the model is able to perform this generalization, Table TABREF14 shows a sample of pos results for transfer across scripts.",
"Among the most surprising results, an M-Bert model that has been fine-tuned using only pos-labeled Urdu (written in Arabic script), achieves 91% accuracy on Hindi (written in Devanagari script), even though it has never seen a single pos-tagged Devanagari word. This provides clear evidence of M-Bert's multilingual representation ability, mapping structures onto new vocabularies based on a shared representation induced solely from monolingual language model training data.",
"However, cross-script transfer is less accurate for other pairs, such as English and Japanese, indicating that M-Bert's multilingual representation is not able to generalize equally well in all cases. A possible explanation for this, as we will see in section SECREF18 , is typological similarity. English and Japanese have a different order of subject, verb and object, while English and Bulgarian have the same, and M-Bert may be having trouble generalizing across different orderings."
],
"extractive_spans": [
"Urdu",
"Hindi",
"English",
"Japanese",
"Bulgarian"
],
"free_form_answer": "",
"highlighted_evidence": [
"M-Bert's ability to transfer between languages that are written in different scripts, and thus have effectively zero lexical overlap, is surprising given that it was trained on separate monolingual corpora and not with a multilingual objective. To probe deeper into how the model is able to perform this generalization, Table TABREF14 shows a sample of pos results for transfer across scripts.",
"Among the most surprising results, an M-Bert model that has been fine-tuned using only pos-labeled Urdu (written in Arabic script), achieves 91% accuracy on Hindi (written in Devanagari script), even though it has never seen a single pos-tagged Devanagari word. This provides clear evidence of M-Bert's multilingual representation ability, mapping structures onto new vocabularies based on a shared representation induced solely from monolingual language model training data.",
"However, cross-script transfer is less accurate for other pairs, such as English and Japanese, indicating that M-Bert's multilingual representation is not able to generalize equally well in all cases. A possible explanation for this, as we will see in section SECREF18 , is typological similarity. English and Japanese have a different order of subject, verb and object, while English and Bulgarian have the same, and M-Bert may be having trouble generalizing across different orderings."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Among the most surprising results, an M-Bert model that has been fine-tuned using only pos-labeled Urdu (written in Arabic script), achieves 91% accuracy on Hindi (written in Devanagari script), even though it has never seen a single pos-tagged Devanagari word. This provides clear evidence of M-Bert's multilingual representation ability, mapping structures onto new vocabularies based on a shared representation induced solely from monolingual language model training data.",
"However, cross-script transfer is less accurate for other pairs, such as English and Japanese, indicating that M-Bert's multilingual representation is not able to generalize equally well in all cases. A possible explanation for this, as we will see in section SECREF18 , is typological similarity. English and Japanese have a different order of subject, verb and object, while English and Bulgarian have the same, and M-Bert may be having trouble generalizing across different orderings."
],
"extractive_spans": [
"Urdu",
"Hindi",
"English",
"Japanese",
"Bulgarian"
],
"free_form_answer": "",
"highlighted_evidence": [
"Among the most surprising results, an M-Bert model that has been fine-tuned using only pos-labeled Urdu (written in Arabic script), achieves 91% accuracy on Hindi (written in Devanagari script), even though it has never seen a single pos-tagged Devanagari word.",
"However, cross-script transfer is less accurate for other pairs, such as English and Japanese, indicating that M-Bert's multilingual representation is not able to generalize equally well in all cases. A possible explanation for this, as we will see in section SECREF18 , is typological similarity. English and Japanese have a different order of subject, verb and object, while English and Bulgarian have the same, and M-Bert may be having trouble generalizing across different orderings."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4dea65d5d31588e40fe1342783b89e8d150e54cf",
"92be1a8f94849ae4afffbb8c51e152024fe528b6"
],
"answer": [
{
"evidence": [
"We perform ner experiments on two datasets: the publicly available CoNLL-2002 and -2003 sets, containing Dutch, Spanish, English, and German BIBREF5 , BIBREF6 ; and an in-house dataset with 16 languages, using the same CoNLL categories. Table TABREF4 shows M-Bert zero-shot performance on all language pairs in the CoNLL data."
],
"extractive_spans": [
"Dutch",
"Spanish",
"English",
"German"
],
"free_form_answer": "",
"highlighted_evidence": [
"We perform ner experiments on two datasets: the publicly available CoNLL-2002 and -2003 sets, containing Dutch, Spanish, English, and German BIBREF5 , BIBREF6 ; and an in-house dataset with 16 languages, using the same CoNLL categories. Table TABREF4 shows M-Bert zero-shot performance on all language pairs in the CoNLL data."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We perform pos experiments using Universal Dependencies (UD) BIBREF7 data for 41 languages. We use the evaluation sets from BIBREF8 . Table TABREF7 shows M-Bert zero-shot results for four European languages. We see that M-Bert generalizes well across languages, achieving over INLINEFORM0 accuracy for all pairs.",
"We perform ner experiments on two datasets: the publicly available CoNLL-2002 and -2003 sets, containing Dutch, Spanish, English, and German BIBREF5 , BIBREF6 ; and an in-house dataset with 16 languages, using the same CoNLL categories. Table TABREF4 shows M-Bert zero-shot performance on all language pairs in the CoNLL data."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (subscripts 2 and 3)\nNER task: Arabic, Bengali, Czech, German, English, Spanish, French, Hindi, Indonesian, Italian, Japanese, Korean, Portuguese, Russian, Turkish, and Chinese.\nPOS task: Arabic, Bulgarian, Catalan, Czech, Danish, German, Greek, English, Spanish, Estonian, Basque, Persian, Finnish, French, Galician, Hebrew, Hindi, Croatian, Hungarian, Indonesian, Italian, Japanese, Korean, Latvian, Marathi, Dutch, Norwegian (Bokmaal and Nynorsk), Polish, Portuguese (European and Brazilian), Romanian, Russian, Slovak, Slovenian, Swedish, Tamil, Telugu, Turkish, Urdu, and Chinese.",
"highlighted_evidence": [
"We perform pos experiments using Universal Dependencies (UD) BIBREF7 data for 41 languages.",
"We perform ner experiments on two datasets: the publicly available CoNLL-2002 and -2003 sets, containing Dutch, Spanish, English, and German BIBREF5 , BIBREF6 ; and an in-house dataset with 16 languages, using the same CoNLL categories."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"52f1dca0443ba6a551d3a3697e96a7c700197503",
"afeafd200e39695cb72b77129a054bba9bbb25da"
],
"answer": [
{
"evidence": [
"M-Bert's ability to transfer between languages that are written in different scripts, and thus have effectively zero lexical overlap, is surprising given that it was trained on separate monolingual corpora and not with a multilingual objective. To probe deeper into how the model is able to perform this generalization, Table TABREF14 shows a sample of pos results for transfer across scripts.",
"Among the most surprising results, an M-Bert model that has been fine-tuned using only pos-labeled Urdu (written in Arabic script), achieves 91% accuracy on Hindi (written in Devanagari script), even though it has never seen a single pos-tagged Devanagari word. This provides clear evidence of M-Bert's multilingual representation ability, mapping structures onto new vocabularies based on a shared representation induced solely from monolingual language model training data.",
"However, cross-script transfer is less accurate for other pairs, such as English and Japanese, indicating that M-Bert's multilingual representation is not able to generalize equally well in all cases. A possible explanation for this, as we will see in section SECREF18 , is typological similarity. English and Japanese have a different order of subject, verb and object, while English and Bulgarian have the same, and M-Bert may be having trouble generalizing across different orderings.",
"Generalizing across typological features ",
"Table TABREF20 shows macro-averaged pos accuracies for transfer between languages grouped according to two typological features: subject/object/verb order, and adjective/noun order BIBREF11 . The results reported include only zero-shot transfer, i.e. they do not include cases training and testing on the same language. We can see that performance is best when transferring between languages that share word order features, suggesting that while M-Bert's multilingual representation is able to map learned structures onto new vocabularies, it does not seem to learn systematic transformations of those structures to accommodate a target language with different word order.",
"Our results show that M-Bert is able to perform cross-lingual generalization surprisingly well. More importantly, we present the results of a number of probing experiments designed to test various hypotheses about how the model is able to perform this transfer. Our experiments show that while high lexical overlap between languages improves transfer, M-Bert is also able to transfer between languages written in different scripts—thus having zero lexical overlap—indicating that it captures multilingual representations. We further show that transfer works best for typologically similar languages, suggesting that while M-Bert's multilingual representation is able to map learned structures onto new vocabularies, it does not seem to learn systematic transformations of those structures to accommodate a target language with different word order."
],
"extractive_spans": [],
"free_form_answer": "Language pairs that are typologically different",
"highlighted_evidence": [
"M-Bert's ability to transfer between languages that are written in different scripts, and thus have effectively zero lexical overlap, is surprising given that it was trained on separate monolingual corpora and not with a multilingual objective.",
"Among the most surprising results, an M-Bert model that has been fine-tuned using only pos-labeled Urdu (written in Arabic script), achieves 91% accuracy on Hindi (written in Devanagari script), even though it has never seen a single pos-tagged Devanagari word. This provides clear evidence of M-Bert's multilingual representation ability, mapping structures onto new vocabularies based on a shared representation induced solely from monolingual language model training data.",
"However, cross-script transfer is less accurate for other pairs, such as English and Japanese, indicating that M-Bert's multilingual representation is not able to generalize equally well in all cases. A possible explanation for this, as we will see in section SECREF18 , is typological similarity. English and Japanese have a different order of subject, verb and object, while English and Bulgarian have the same, and M-Bert may be having trouble generalizing across different orderings.\n\n",
"Generalizing across typological features\nTable TABREF20 shows macro-averaged pos accuracies for transfer between languages grouped according to two typological features: subject/object/verb order, and adjective/noun order BIBREF11 . The results reported include only zero-shot transfer, i.e. they do not include cases training and testing on the same language. We can see that performance is best when transferring between languages that share word order features, suggesting that while M-Bert's multilingual representation is able to map learned structures onto new vocabularies, it does not seem to learn systematic transformations of those structures to accommodate a target language with different word order.",
"\nOur results show that M-Bert is able to perform cross-lingual generalization surprisingly well. More importantly, we present the results of a number of probing experiments designed to test various hypotheses about how the model is able to perform this transfer. Our experiments show that while high lexical overlap between languages improves transfer, M-Bert is also able to transfer between languages written in different scripts—thus having zero lexical overlap—indicating that it captures multilingual representations. We further show that transfer works best for typologically similar languages, suggesting that while M-Bert's multilingual representation is able to map learned structures onto new vocabularies, it does not seem to learn systematic transformations of those structures to accommodate a target language with different word order."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ab4da098e9242f565fd3084da4596c4dec9d3e38",
"bb410f146261cae7d2e20a4dadf46ef4476fd1e3"
],
"answer": [
{
"evidence": [
"Figure FIGREF9 plots ner F1 score versus entity overlap for zero-shot transfer between every language pair in an in-house dataset of 16 languages, for both M-Bert and En-Bert. We can see that performance using En-Bert depends directly on word piece overlap: the ability to transfer deteriorates as word piece overlap diminishes, and F1 scores are near zero for languages written in different scripts. M-Bert's performance, on the other hand, is flat for a wide range of overlaps, and even for language pairs with almost no lexical overlap, scores vary between INLINEFORM0 and INLINEFORM1 , showing that M-Bert's pretraining on multiple languages has enabled a representational capacity deeper than simple vocabulary memorization.",
"Following BIBREF10 , we compare languages on a subset of the WALS features BIBREF11 relevant to grammatical ordering. Figure FIGREF17 plots pos zero-shot accuracy against the number of common WALS features. As expected, performance improves with similarity, showing that it is easier for M-Bert to map linguistic structures when they are more similar, although it still does a decent job for low similarity languages when compared to En-Bert."
],
"extractive_spans": [
"ner F1 score",
"pos zero-shot accuracy"
],
"free_form_answer": "",
"highlighted_evidence": [
"Figure FIGREF9 plots ner F1 score versus entity overlap for zero-shot transfer between every language pair in an in-house dataset of 16 languages, for both M-Bert and En-Bert.",
"Figure FIGREF17 plots pos zero-shot accuracy against the number of common WALS features."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We perform pos experiments using Universal Dependencies (UD) BIBREF7 data for 41 languages. We use the evaluation sets from BIBREF8 . Table TABREF7 shows M-Bert zero-shot results for four European languages. We see that M-Bert generalizes well across languages, achieving over INLINEFORM0 accuracy for all pairs.",
"Among the most surprising results, an M-Bert model that has been fine-tuned using only pos-labeled Urdu (written in Arabic script), achieves 91% accuracy on Hindi (written in Devanagari script), even though it has never seen a single pos-tagged Devanagari word. This provides clear evidence of M-Bert's multilingual representation ability, mapping structures onto new vocabularies based on a shared representation induced solely from monolingual language model training data.",
"Following BIBREF10 , we compare languages on a subset of the WALS features BIBREF11 relevant to grammatical ordering. Figure FIGREF17 plots pos zero-shot accuracy against the number of common WALS features. As expected, performance improves with similarity, showing that it is easier for M-Bert to map linguistic structures when they are more similar, although it still does a decent job for low similarity languages when compared to En-Bert.",
"We sample 5000 pairs of sentences from WMT16 BIBREF14 and feed each sentence (separately) to M-Bert with no fine-tuning. We then extract the hidden feature activations at each layer for each of the sentences, and average the representations for the input tokens except [cls] and [sep], to get a vector for each sentence, at each layer INLINEFORM0 , INLINEFORM1 . For each pair of sentences, e.g. INLINEFORM2 , we compute the vector pointing from one to the other and average it over all pairs: INLINEFORM3 , where INLINEFORM4 is the number of pairs. Finally, we translate each sentence, INLINEFORM5 , by INLINEFORM6 , find the closest German sentence vector, and measure the fraction of times the nearest neighbour is the correct pair, which we call the “nearest neighbor accuracy”."
],
"extractive_spans": [
"accuracy"
],
"free_form_answer": "",
"highlighted_evidence": [
"We perform pos experiments using Universal Dependencies (UD) BIBREF7 data for 41 languages. We use the evaluation sets from BIBREF8 . Table TABREF7 shows M-Bert zero-shot results for four European languages. We see that M-Bert generalizes well across languages, achieving over INLINEFORM0 accuracy for all pairs.",
"Among the most surprising results, an M-Bert model that has been fine-tuned using only pos-labeled Urdu (written in Arabic script), achieves 91% accuracy on Hindi (written in Devanagari script), even though it has never seen a single pos-tagged Devanagari word.",
"Following BIBREF10 , we compare languages on a subset of the WALS features BIBREF11 relevant to grammatical ordering. Figure FIGREF17 plots pos zero-shot accuracy against the number of common WALS features.",
"We sample 5000 pairs of sentences from WMT16 BIBREF14 and feed each sentence (separately) to M-Bert with no fine-tuning. We then extract the hidden feature activations at each layer for each of the sentences, and average the representations for the input tokens except [cls] and [sep], to get a vector for each sentence, at each layer INLINEFORM0 , INLINEFORM1 . For each pair of sentences, e.g. INLINEFORM2 , we compute the vector pointing from one to the other and average it over all pairs: INLINEFORM3 , where INLINEFORM4 is the number of pairs. Finally, we translate each sentence, INLINEFORM5 , by INLINEFORM6 , find the closest German sentence vector, and measure the fraction of times the nearest neighbour is the correct pair, which we call the “nearest neighbor accuracy”."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"550655faf3d3688bab85d62bd4d4fa105270766a",
"cc0615bce5bd6b7b80c161109fba5463717a60fd"
],
"answer": [
{
"evidence": [
"We perform ner experiments on two datasets: the publicly available CoNLL-2002 and -2003 sets, containing Dutch, Spanish, English, and German BIBREF5 , BIBREF6 ; and an in-house dataset with 16 languages, using the same CoNLL categories. Table TABREF4 shows M-Bert zero-shot performance on all language pairs in the CoNLL data.",
"We perform pos experiments using Universal Dependencies (UD) BIBREF7 data for 41 languages. We use the evaluation sets from BIBREF8 . Table TABREF7 shows M-Bert zero-shot results for four European languages. We see that M-Bert generalizes well across languages, achieving over INLINEFORM0 accuracy for all pairs.",
"We sample 5000 pairs of sentences from WMT16 BIBREF14 and feed each sentence (separately) to M-Bert with no fine-tuning. We then extract the hidden feature activations at each layer for each of the sentences, and average the representations for the input tokens except [cls] and [sep], to get a vector for each sentence, at each layer INLINEFORM0 , INLINEFORM1 . For each pair of sentences, e.g. INLINEFORM2 , we compute the vector pointing from one to the other and average it over all pairs: INLINEFORM3 , where INLINEFORM4 is the number of pairs. Finally, we translate each sentence, INLINEFORM5 , by INLINEFORM6 , find the closest German sentence vector, and measure the fraction of times the nearest neighbour is the correct pair, which we call the “nearest neighbor accuracy”."
],
"extractive_spans": [
"CoNLL-2002 and -2003 ",
"Universal Dependencies",
"WMT16 "
],
"free_form_answer": "",
"highlighted_evidence": [
"We perform ner experiments on two datasets: the publicly available CoNLL-2002 and -2003 sets, containing Dutch, Spanish, English, and German BIBREF5 , BIBREF6 ; and an in-house dataset with 16 languages, using the same CoNLL categories.",
"We perform pos experiments using Universal Dependencies (UD) BIBREF7 data for 41 languages.",
"We sample 5000 pairs of sentences from WMT16 BIBREF14 and feed each sentence (separately) to M-Bert with no fine-tuning. We then extract the hidden feature activations at each layer for each of the sentences, and average the representations for the input tokens except [cls] and [sep], to get a vector for each sentence, at each layer INLINEFORM0 , INLINEFORM1 . For each pair of sentences, e.g. INLINEFORM2 , we compute the vector pointing from one to the other and average it over all pairs: INLINEFORM3 , where INLINEFORM4 is the number of pairs. Finally, we translate each sentence, INLINEFORM5 , by INLINEFORM6 , find the closest German sentence vector, and measure the fraction of times the nearest neighbour is the correct pair, which we call the “nearest neighbor accuracy”."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We perform ner experiments on two datasets: the publicly available CoNLL-2002 and -2003 sets, containing Dutch, Spanish, English, and German BIBREF5 , BIBREF6 ; and an in-house dataset with 16 languages, using the same CoNLL categories. Table TABREF4 shows M-Bert zero-shot performance on all language pairs in the CoNLL data.",
"We perform pos experiments using Universal Dependencies (UD) BIBREF7 data for 41 languages. We use the evaluation sets from BIBREF8 . Table TABREF7 shows M-Bert zero-shot results for four European languages. We see that M-Bert generalizes well across languages, achieving over INLINEFORM0 accuracy for all pairs."
],
"extractive_spans": [
"CoNLL-2002 and -2003 sets",
"an in-house dataset with 16 languages",
"Universal Dependencies (UD) BIBREF7"
],
"free_form_answer": "",
"highlighted_evidence": [
"We perform ner experiments on two datasets: the publicly available CoNLL-2002 and -2003 sets, containing Dutch, Spanish, English, and German BIBREF5 , BIBREF6 ; and an in-house dataset with 16 languages, using the same CoNLL categories.",
"We perform pos experiments using Universal Dependencies (UD) BIBREF7 data for 41 languages."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"Which languages with different script do they look at?",
"What languages do they experiment with?",
"What language pairs are affected?",
"What evaluation metrics are used?",
"What datasets did they use?"
],
"question_id": [
"475ef4ad32a8589dae9d97048166d732ae5d7beb",
"3fd8eab282569b1c18b82f20d579b335ae70e79f",
"8e9561541f2e928eb239860c2455a254b5aceaeb",
"50c1bf8b928069f3ffc7f0cb00aa056a163ef336",
"2ddfb40a9e73f382a2eb641c8e22bbb80cef017b"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: NER F1 results on the CoNLL data.",
"Table 2: POS accuracy on a subset of UD languages.",
"Figure 1: Zero-shot NER F1 score versus entity word piece overlap among 16 languages. While performance using EN-BERT depends directly on word piece overlap, M-BERT’s performance is largely independent of overlap, indicating that it learns multilingual representations deeper than simple vocabulary memorization.",
"Table 3: NER F1 results fine-tuning and evaluating on the same language (not zero-shot transfer).",
"Table 4: POS accuracy on the UD test set for languages with different scripts. Row=fine-tuning, column=eval.",
"Table 6: M-BERT’s POS accuracy on the code-switched Hindi/English dataset from Bhat et al. (2018), on script-corrected and original (transliterated) tokens, and comparisons to existing work on code-switch POS.",
"Figure 2: Zero-shot POS accuracy versus number of common WALS features. Due to their scarcity, we exclude pairs with no common features.",
"Table 5: Macro-average POS accuracies when transferring between SVO/SOV languages or AN/NA languages. Row = fine-tuning, column = evaluation.",
"Figure 3: Accuracy of nearest neighbor translation for EN-DE, EN-RU, and HI-UR.",
"Table 8: POS accuracy on the UD test sets for a subset of European languages using EN-BERT. The row specifies a fine-tuning language, the column the evaluation language. There is a big gap between this model’s zeroshot performance and M-BERT’s, showing the pretraining is helping learn a useful cross-lingual representation for grammar."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"2-Figure1-1.png",
"3-Table3-1.png",
"3-Table4-1.png",
"4-Table6-1.png",
"4-Figure2-1.png",
"4-Table5-1.png",
"5-Figure3-1.png",
"6-Table8-1.png"
]
} | [
"What languages do they experiment with?",
"What language pairs are affected?"
] | [
[
"1906.01502-Named entity recognition experiments-0",
"1906.01502-Part of speech tagging experiments-0"
],
[
"1906.01502-Generalizing across typological features -0",
"1906.01502-Generalization across scripts-0",
"1906.01502-Introduction-2",
"1906.01502-Generalization across scripts-2",
"1906.01502-Generalization across scripts-1"
]
] | [
"Answer with content missing: (subscripts 2 and 3)\nNER task: Arabic, Bengali, Czech, German, English, Spanish, French, Hindi, Indonesian, Italian, Japanese, Korean, Portuguese, Russian, Turkish, and Chinese.\nPOS task: Arabic, Bulgarian, Catalan, Czech, Danish, German, Greek, English, Spanish, Estonian, Basque, Persian, Finnish, French, Galician, Hebrew, Hindi, Croatian, Hungarian, Indonesian, Italian, Japanese, Korean, Latvian, Marathi, Dutch, Norwegian (Bokmaal and Nynorsk), Polish, Portuguese (European and Brazilian), Romanian, Russian, Slovak, Slovenian, Swedish, Tamil, Telugu, Turkish, Urdu, and Chinese.",
"Language pairs that are typologically different"
] | 208 |
1611.00440 | And the Winner is ...: Bayesian Twitter-based Prediction on 2016 U.S. Presidential Election | This paper describes a Naive-Bayesian predictive model for 2016 U.S. Presidential Election based on Twitter data. We use 33,708 tweets gathered since December 16, 2015 until February 29, 2016. We introduce a simpler data preprocessing method to label the data and train the model. The model achieves 95.8% accuracy on 10-fold cross validation and predicts Ted Cruz and Bernie Sanders as Republican and Democratic nominee respectively. It achieves a comparable result to those in its competitor methods. | {
"paragraphs": [
[
"Presidential election is an important moment for every country, including the United States. Their economic policies, which are set by the government, affect the economy of other countries BIBREF0 . On 2016 U.S. Presidential Election, Republican and Democratic candidates use Twitter as their campaign media. Previous researches have predicted the outcome of U.S. presidential election using Twitter BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . Some of them proved that Twitter data can complement or even predict the poll results. This follows the increasing improvement in the text mining researches BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 .",
"Some of the most recent studies are BIBREF3 , BIBREF2 , BIBREF1 , BIBREF10 . Below we discuss these three recent studies and explain how our study relates to theirs. The first study is done by BIBREF3 , which analyzed the sentiment on 2008 U.S. Presidential Candidates by calculating sentiment ratio using moving average. They counted the sentiment value for Obama and McCain based on number of the positive and negative words stated on each tweet. The tweets were gathered during 2008-2009, whereas the positive and negative words were acquired from OpinionFinder. They found that the comparison between sentiment on tweets and polls were complex since people might choose \"Obama\", \"McCain\", \"have not decided\", \"not going to vote\", or any independent candidate on the polls.",
"The second study predicted the outcome of 2012 U.S. Presidential Election polls using Naive Bayesian models BIBREF2 . They collected over 32 million tweets from September 29 until November 16, 2012. They used Tweepy and set keywords for each candidate to collect the tweets, such as mitt romney, barack obama, us election. The collected tweets passed some preprocessing stages: (1) URL, mentions, hashtags, RT, and stop words removal; (2) tokenization; and (3) additional not_ for negation. They analyzed 10,000 randomly selected tweets which only contain a candidate name. The analysis results were compared to Huffington Post's polls and they found that Obama's popularity on Twitter represented the polls result. This research didn't use tweets with two or more candidate names since it requires more complex preprocessing methods.",
"The third study built a system for real-time sentiment analysis on 2012 U.S. Presidential Election to show public opinion about each candidate on Twitter BIBREF1 . They collected tweets for each candidates using Gnip Power Track since October 12, 2012 and tokenized them. The tweets were labeled by around 800 turkers on Amazon Mechanical Turk (AMT). They trained a Naive Bayes Classifier using 17,000 tweets which consists of 4 classes: (1) positive; (2) negative; (3) neutral; and (4) unsure. It achieved 59% accuracy, which is the best performance achieved in the three recent studies. They visualized the sentiment on a dashboard and calculated the trending words using TF-IDF.",
"As far as we know, there is not any research about prediction on 2016 U.S. Presidential Election yet. Previous researches either set the sentiment of a tweet directly based on a subjectivity lexicon BIBREF3 or preprocessed the tweet using a complex preprocessing method BIBREF1 , BIBREF2 . BIBREF2 not only removed URLs, mentions, retweets, hashtags, numbers and stop words; but also tokenized the tweets and added not_ on negative words. BIBREF1 tokenized the tweets and separated URLs, emoticons, phone numbers, HTML tags, mentions, hashtags, fraction or decimals, and symbol or Unicode character repetition. This research analyzes sentiment on tweets about 2016 U.S. Presidential candidates. We will build a Naive Bayesian predictive model for each candidate and compare the prediction with RealClearPolitics.com. We expect to have a correct prediction on the leading candidates for Democratic and Republican Party. We prove that using a simpler preprocessing method can still have comparable performance to the best performing recent study BIBREF1 .",
"We explain our data preparation methods in the next section. It is followed by our research methodology in Section III. We present our results in Section IV, which is followed by discussion and conclusion in Section V and VI."
],
[
"We gathered 371,264 tweets using Twitter Streaming API on Tweepy BIBREF2 since December 16, 2015 until February 29, 2016. We use #Election2016 as the search keyword since it is the official hashtag used during 2016 U.S. Presidential Election cycle and it covers conversations about all candidates. We separate the tweets per period, which is seven days. Figure 1 shows tweets frequency distribution, with the average of 37,126.4 tweets per period and standard deviation 27,823.82 tweets. Data collection from January 20 to January 26, 2016 are limited due to resource limitation. The data are saved as JSON files.",
"Each line of the JSON files represents a tweet, which consists of 26 main attributes, such as created_at, ID, text, retweet_count, and lang. We only use the contents of created_at and text attributes since this research focuses on the sentiment toward the candidates in a particular time, not including the geographic location and other information. The collected tweets are mainly written in English. We publish the raw and preprocessed tweets upon request for future use. The data are available for research use by email."
],
[
"We preprocess the data by: (1) removing URLs and pictures, also (2) by filtering tweets which have candidates' name. Hashtags, mentions, and retweets are not removed in order to maintain the original meaning of a tweet. We only save tweets which have passed the two requirements such as in Table 1. The first example shows no change in the tweet's content, since there isn't any URLs or pictures, and it contains a candidate's name: Bernie Sanders. The second example shows a removed tweet, which doesn't contain any candidates' name. The preprocessing stage changes the third tweet's contents. It removes the URLs and still keeps the tweet because it contains \"Hillary Clinton\" and \"Donald Trump\". The preprocessing stage removes 41% of the data (Figure 2)."
],
[
"The preprocessed tweets are labeled manually by 11 annotators who understand English. All annotators are given either grade as part of their coursework or souvenirs for their work. The given label consists of the intended candidate and the sentiment. The annotators interpret the tweet and decide whom the tweet relates to. If they think the tweets does not relate to particular candidate nor understand the content, they can choose \"not clear\" as the label. Otherwise, they can relate it to one candidate and label it as positive or negative. We divide the tweets and annotators into three groups (Table II). They label as many tweets as they can since January 24 until April 16, 2016.",
"The validity of the label is determined by means of majority rule BIBREF11 . Each tweet is distributed to three or five annotators and it is valid when there is a label which occurs the most. As the final data preparation step, we remove all \"not clear\" labeled tweets. Figure 3 shows the distribution of tweet labels. Most tweets are related to Bernie Sanders, Donald Trump, and Hillary Clinton."
],
[
"The presidential nominees are predicted by finding candidates with the most predicted positive sentiment. The sentiments are predicted using Bayesian model. This section describes: (1) the model training, (2) model accuracy test, and (3) prediction accuracy test."
],
[
"Our models are trained using Naive Bayes Classifier. We have one model representing each candidate, consequently we have 15 trained models. We use nltk.classify module on Natural Language Toolkit library on Python. We use the labeled data gathered since December 16, 2015 until February 2, 2016 as training data to our models. The rest of our labeled data will be used to evaluate the models."
],
[
"Our models' accuracy is tested using 10-fold cross validation. Model validation is done using scikit-learn library. The accuracy is calculated by checking the confusion matrix BIBREF12 , BIBREF13 and its INLINEFORM0 score BIBREF14 .",
"On some folds, the models predict the sentiment in extreme value (i.e. only have positive or negative outcomes). Due to these cases, we can not calculate INLINEFORM0 score of Chris Christie's model. The average accuracy and INLINEFORM1 score are 95.8% and 0.96 respectively.",
" INLINEFORM0 score only measures how well the model works on predicting positive sentiment, so we propose a modified INLINEFORM1 score ( INLINEFORM2 ) by reversing the formula. INLINEFORM3 score shows how well the model predicts negative sentiment. DISPLAYFORM0 ",
"The models show good accuracy and INLINEFORM0 score (Table III). It shows that the model can predict the test data almost perfectly (95.8%) with slightly better result on positive sentiment than negative ones, which can be seen by the larger value of INLINEFORM1 than INLINEFORM2 .",
"The test results do not show exact effect of training data and the model accuracy. Models with smaller number of training data (e.g. Huckabee's, Santorum's) achieve higher accuracy than models with larger number of training data (e.g. Trump's, Clinton's), while the lowest accuracy is achieved by Kasich's, which is trained with small number of training data. The undefined value of INLINEFORM0 and INLINEFORM1 scores on Christie's, Gilmore's, and Santorum's model shows extreme predictions on these models."
],
[
"The models use tweets gathered from February 3 until 9, 2016 as the prediction input. The prediction follows two steps: (1) we calculate the positive sentiment from tweets and consider the number of positive sentiment as the likelihood of a candidate to be the nominee, and (2) we sort the candidates by number of their positive sentiment. The ranks are compared to the poll results on RealClearPolitics.com. We calculate the error rate (E) by dividing the difference of the poll rank with our predicted rank with number of candidates ( INLINEFORM0 ). DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 and n equals the number of candidates. Po and Pre are the poll and prediction ranks associated with RealClearPolitics.com and the model respectively.",
"We use tweets on February 3-9, 2016 as the input to our models, regarding to the specified candidate. We rank the prediction result by sorting the number of positive predictions on each candidate. On Democratic Party, Bernie Sanders leads the rank with 3,335 tweets, followed by Martin O'Malley (14 tweets) and Hillary Clinton (none). The prediction ranks on Republican Party are (1) Ted Cruz (1,432 tweets), (2) Marco Rubio (1,239 tweets), (3) Rand Paul (645 tweets), (4) Rick Santorum (186 tweets), (5) John Kasich (133 tweets), (6) Carly Fiorina (88 tweets), (7) Mike Huckabee (11 tweets), and (8) Jim Gilmore (5 tweets). The other Republican candidates do not have any positive prediction, so we place them at the bottom rank.",
"Our model prediction ranks from 1 to 9 and it differs from the poll's (rank 1 to 8). Before we do the comparison, we adjust the prediction ranks in order to make an equal range. We move Jeb Bush, Ben Carson, Chris Christie, and Donald Trump, who are formerly on the 9th rank, to the 8th rank. We compare the prediction ranks with the poll and calculate the error rate. Our model gets 1.33 error of 2 remaining Democratic candidates, which we consider not good. Our model performs better on predicting Republican candidates, which achieves 1.67 error of 7 remaining candidates (see Table IV and V).",
"Overall prediction accuracy can be calculated by subtracting one with the average result of error rate division on each party by number of its remaining candidates. We achieve 0.548 prediction accuracy, which is not good enough BIBREF1 . The model accuracy is mainly affected by the large error rate on Democratic candidates (1.33 from 2 candidates)."
],
[
"Using simple preprocessed data, our Naive Bayesian model successfully achieves 95.8% accuracy on 10-fold cross validation and gets 54.8% accuracy on predicting the poll result. The model predicts Ted Cruz and Bernie Sanders as the nominee of Republican and Democratic Party respectively. Based on the positive predictions, it predicts that Bernie Sanders will be elected as the 2016 U.S. President.",
"Although it has 95.8% accuracy during the model test, the model's prediction does not represent the poll. Table III shows that model's accuracy is not dependent of its number of training data. Model with less training data (e.g. Mike Huckabee's) can perform perfectly during the model test and only misses a rank on the prediction, whereas model with more training data (e.g. Donald Trump's) can have worse performance.",
"To see how the model accuracy is affected by number of training data, we train more models for each candidate using n first tweets and use them to predict the next 4000 tweets' sentiment (see Figure 4). Bernie Sanders' and Donald Trump's models have the most consistent accuracy on predicting the sentiment. Models with less training data (e.g. Martin O'Malley, Jim Gilmore, Mike Huckabee) tend to have fluctuating accuracy. The models which are trained using 1,000 first tweets have 55.85% of average accuracy and 26.49% of standard deviation, whereas the models which are trained using 33,000 first tweets have slightly different accuracy: 65.75% of average accuracy and 27.79% of standard deviation. This shows that the number of training data does not affect the overall model accuracy.",
"Our model might not represent the poll, but the election is still ongoing and we do not know which candidate will become the next U.S. President. Hence, there is possibility that the predicted nominees become the next U.S. President. Otherwise, Twitter might not be used to predict the actual polls BIBREF15 ."
],
[
"We built Naive Bayesian predictive models for 2016 U.S. Presidential Election. We use the official hashtag and simple preprocessing method to prepare the data without modifying its meaning. Our model achieves 95.8% accuracy during the model test and predicts the poll with 54.8% accuracy. The model predicts that Bernie Sanders and Ted Cruz will become the nominees of Democratic and Republican Party respectively, and the election will be won by Bernie Sanders."
]
],
"section_name": [
"Introduction",
"Data Collection",
"Data Preprocessing",
"Data Labeling",
"Methodology",
"Model Training",
"Model Accuracy Test",
"Prediction Accuracy Test",
"Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"00b9c6da2266bd020fa4343dc5c19e15f7668686",
"aa697d1a20323c47b3b2a48e1b9189e6bb69d091"
],
"answer": [
{
"evidence": [
"Overall prediction accuracy can be calculated by subtracting one with the average result of error rate division on each party by number of its remaining candidates. We achieve 0.548 prediction accuracy, which is not good enough BIBREF1 . The model accuracy is mainly affected by the large error rate on Democratic candidates (1.33 from 2 candidates)."
],
"extractive_spans": [
"BIBREF1"
],
"free_form_answer": "",
"highlighted_evidence": [
"We achieve 0.548 prediction accuracy, which is not good enough BIBREF1 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The third study built a system for real-time sentiment analysis on 2012 U.S. Presidential Election to show public opinion about each candidate on Twitter BIBREF1 . They collected tweets for each candidates using Gnip Power Track since October 12, 2012 and tokenized them. The tweets were labeled by around 800 turkers on Amazon Mechanical Turk (AMT). They trained a Naive Bayes Classifier using 17,000 tweets which consists of 4 classes: (1) positive; (2) negative; (3) neutral; and (4) unsure. It achieved 59% accuracy, which is the best performance achieved in the three recent studies. They visualized the sentiment on a dashboard and calculated the trending words using TF-IDF.",
"As far as we know, there is not any research about prediction on 2016 U.S. Presidential Election yet. Previous researches either set the sentiment of a tweet directly based on a subjectivity lexicon BIBREF3 or preprocessed the tweet using a complex preprocessing method BIBREF1 , BIBREF2 . BIBREF2 not only removed URLs, mentions, retweets, hashtags, numbers and stop words; but also tokenized the tweets and added not_ on negative words. BIBREF1 tokenized the tweets and separated URLs, emoticons, phone numbers, HTML tags, mentions, hashtags, fraction or decimals, and symbol or Unicode character repetition. This research analyzes sentiment on tweets about 2016 U.S. Presidential candidates. We will build a Naive Bayesian predictive model for each candidate and compare the prediction with RealClearPolitics.com. We expect to have a correct prediction on the leading candidates for Democratic and Republican Party. We prove that using a simpler preprocessing method can still have comparable performance to the best performing recent study BIBREF1 ."
],
"extractive_spans": [
"Naive Bayes Classifier"
],
"free_form_answer": "",
"highlighted_evidence": [
"The third study built a system for real-time sentiment analysis on 2012 U.S. Presidential Election to show public opinion about each candidate on Twitter BIBREF1 .",
" They trained a Naive Bayes Classifier using 17,000 tweets which consists of 4 classes: (1) positive; (2) negative; (3) neutral; and (4) unsure. It achieved 59% accuracy, which is the best performance achieved in the three recent studies.",
"We prove that using a simpler preprocessing method can still have comparable performance to the best performing recent study BIBREF1 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"30774d2016f1f560589723bedf30164446457423",
"922b3cae507dda810c070afdfa8257ca3c266c36"
],
"answer": [
{
"evidence": [
"We preprocess the data by: (1) removing URLs and pictures, also (2) by filtering tweets which have candidates' name. Hashtags, mentions, and retweets are not removed in order to maintain the original meaning of a tweet. We only save tweets which have passed the two requirements such as in Table 1. The first example shows no change in the tweet's content, since there isn't any URLs or pictures, and it contains a candidate's name: Bernie Sanders. The second example shows a removed tweet, which doesn't contain any candidates' name. The preprocessing stage changes the third tweet's contents. It removes the URLs and still keeps the tweet because it contains \"Hillary Clinton\" and \"Donald Trump\". The preprocessing stage removes 41% of the data (Figure 2)."
],
"extractive_spans": [],
"free_form_answer": "Tweets without candidate names are removed, URLs and pictures are removed from the tweets that remain.",
"highlighted_evidence": [
"We preprocess the data by: (1) removing URLs and pictures, also (2) by filtering tweets which have candidates' name. Hashtags, mentions, and retweets are not removed in order to maintain the original meaning of a tweet."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We preprocess the data by: (1) removing URLs and pictures, also (2) by filtering tweets which have candidates' name. Hashtags, mentions, and retweets are not removed in order to maintain the original meaning of a tweet. We only save tweets which have passed the two requirements such as in Table 1. The first example shows no change in the tweet's content, since there isn't any URLs or pictures, and it contains a candidate's name: Bernie Sanders. The second example shows a removed tweet, which doesn't contain any candidates' name. The preprocessing stage changes the third tweet's contents. It removes the URLs and still keeps the tweet because it contains \"Hillary Clinton\" and \"Donald Trump\". The preprocessing stage removes 41% of the data (Figure 2)."
],
"extractive_spans": [
"(1) removing URLs and pictures",
"(2) by filtering tweets which have candidates' name"
],
"free_form_answer": "",
"highlighted_evidence": [
"We preprocess the data by: (1) removing URLs and pictures, also (2) by filtering tweets which have candidates' name. Hashtags, mentions, and retweets are not removed in order to maintain the original meaning of a tweet."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"what are the other methods they compare to?",
"what preprocessing method is introduced?"
],
"question_id": [
"65b39676db60f914f29f74b7c1264422ee42ad5c",
"a2baa8e266318f23f43321c4b2b9cf467718c94a"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Fig. 1. Collected Tweets Distribution (μ = 37, 126.4;σ = 27, 823.82)",
"TABLE I TWEETS EXAMPLE ON PREPROCESSING STAGE",
"Fig. 2. Removed tweets on preprocessing stage (μ = 40.87%;σ = 4.98%)",
"Fig. 3. Sentiment Distribution by Candidates",
"TABLE II MODEL TEST RESULTS",
"TABLE IV PREDICTION ERROR ON REPUBLICAN CANDIDATES.",
"Fig. 4. Model evaluation using n first tweets."
],
"file": [
"2-Figure1-1.png",
"2-TableI-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-TableII-1.png",
"4-TableIV-1.png",
"5-Figure4-1.png"
]
} | [
"what preprocessing method is introduced?"
] | [
[
"1611.00440-Data Preprocessing-0"
]
] | [
"Tweets without candidate names are removed, URLs and pictures are removed from the tweets that remain."
] | 209 |
1702.02367 | Iterative Multi-document Neural Attention for Multiple Answer Prediction | People have information needs of varying complexity, which can be solved by an intelligent agent able to answer questions formulated in a proper way, eventually considering user context and preferences. In a scenario in which the user profile can be considered as a question, intelligent agents able to answer questions can be used to find the most relevant answers for a given user. In this work we propose a novel model based on Artificial Neural Networks to answer questions with multiple answers by exploiting multiple facts retrieved from a knowledge base. The model is evaluated on the factoid Question Answering and top-n recommendation tasks of the bAbI Movie Dialog dataset. After assessing the performance of the model on both tasks, we try to define the long-term goal of a conversational recommender system able to interact using natural language and to support users in their information seeking processes in a personalized way. | {
"paragraphs": [
[
"We are surrounded by a huge variety of technological artifacts which “live” with us today. These artifacts can help us in several ways because they have the power to accomplish complex and time-consuming tasks. Unfortunately, common software systems can do for us only specific types of tasks, in a strictly algorithmic way which is pre-defined by the software designer. Machine Learning (ML), a branch of Artificial Intelligence (AI), gives machines the ability to learn to complete tasks without being explicitly programmed.",
"People have information needs of varying complexity, ranging from simple questions about common facts which can be found in encyclopedias, to more sophisticated cases in which they need to know what movie to watch during a romantic evening. These tasks can be solved by an intelligent agent able to answer questions formulated in a proper way, eventually considering user context and preferences.",
"Question Answering (QA) emerged in the last decade as one of the most promising fields in AI, since it allows to design intelligent systems which are able to give correct answers to user questions expressed in natural language. Whereas, recommender systems produce individualized recommendations as output and have the effect of guiding the user in a personalized way to interesting or useful objects in a large space of possible options. In a scenario in which the user profile (the set of user preferences) can be represented by a question, intelligent agents able to answer questions can be used to find the most appealing items for a given user, which is the classical task that recommender systems can solve. Despite the efficacy of classical recommender systems, generally they are not able to handle a conversation with the user so they miss the possibility of understanding his contextual information, emotions and feedback to refine the user profile and provide enhanced suggestions. Conversational recommender systems assist online users in their information-seeking and decision making tasks by supporting an interactive process BIBREF0 which could be goal oriented with the task of starting general and, through a series of interaction cycles, narrowing down the user interests until the desired item is obtained BIBREF1 .",
"In this work we propose a novel model based on Artificial Neural Networks to answer questions exploiting multiple facts retrieved from a knowledge base and evaluate it on a QA task. Moreover, the effectiveness of the model is evaluated on the top-n recommendation task, where the aim of the system is to produce a list of suggestions ranked according to the user preferences. After having assessed the performance of the model on both tasks, we try to define the long-term goal of a conversational recommender system able to interact with the user using natural language and to support him in the information seeking process in a personalized way.",
"In order to fulfill our long-term goal of building a conversational recommender system we need to assess the performance of our model on specific tasks involved in this scenario. A recent work which goes in this direction is reported in BIBREF2 , which presents the bAbI Movie Dialog dataset, composed by different tasks such as factoid QA, top-n recommendation and two more complex tasks, one which mixes QA and recommendation and one which contains turns of dialogs taken from Reddit. Having more specific tasks like QA and recommendation, and a more complex one which mixes both tasks gives us the possibility to evaluate our model on different levels of granularity. Moreover, the subdivision in turns of the more complex task provides a proper benchmark of the model capability to handle an effective dialog with the user.",
"For the task related to QA, a lot of datasets have been released in order to assess the machine reading and comprehension capabilities and a lot of neural network-based models have been proposed. Our model takes inspiration from BIBREF3 , which is able to answer Cloze-style BIBREF4 questions repeating an attention mechanism over the query and the documents multiple times. Despite the effectiveness on the Cloze-style task, the original model does not consider multiple documents as a source of information to answer questions, which is fundamental in order to extract the answer from different relevant facts. The restricted assumption that the answer is contained in the given document does not allow the model to provide an answer which does not belong to the document. Moreover, this kind of task does not expect multiple answers for a given question, which is important for the complex information needs required for a conversational recommender system.",
"According to our vision, the main outcomes of our work can be considered as building blocks for a conversational recommender system and can be summarized as follows:",
"The paper is organized as follows: Section SECREF2 describes our model, while Section SECREF3 summarizes the evaluation of the model on the two above-mentioned tasks and the comparison with respect to state-of-the-art approaches. Section SECREF4 gives an overview of the literature of both QA and recommender systems, while final remarks and our long-term vision are reported in Section SECREF5 ."
],
[
"Given a query INLINEFORM0 , an operator INLINEFORM1 that produces the set of documents relevant for INLINEFORM2 , where INLINEFORM3 is the set of all queries and INLINEFORM4 is the set of all documents. Our model defines a workflow in which a sequence of inference steps are performed in order to extract relevant information from INLINEFORM5 to generate the answers for INLINEFORM6 .",
"Following BIBREF3 , our workflow consists of three steps: (1) the encoding phase, which generates meaningful representations for query and documents; (2) the inference phase, which extracts relevant semantic relationships between the query and the documents by using an iterative attention mechanism and finally (3) the prediction phase, which generates a score for each candidate answer."
],
[
"The input of the encoding phase is given by a query INLINEFORM0 and a set of documents INLINEFORM1 . Both queries and documents are represented by a sequence of words INLINEFORM2 , drawn from a vocabulary INLINEFORM3 . Each word is represented by a continuous INLINEFORM4 -dimensional word embedding INLINEFORM5 stored in a word embedding matrix INLINEFORM6 .",
"The sequences of dense representations for INLINEFORM0 and INLINEFORM1 are encoded using a bidirectional recurrent neural network encoder with Gated Recurrent Units (GRU) as in BIBREF3 which represents each word INLINEFORM2 as the concatenation of a forward encoding INLINEFORM3 and a backward encoding INLINEFORM4 . From now on, we denote the contextual representation for the word INLINEFORM5 by INLINEFORM6 and the contextual representation for the word INLINEFORM7 in the document INLINEFORM8 by INLINEFORM9 . Differently from BIBREF3 , we build a unique representation for the whole set of documents INLINEFORM10 related to the query INLINEFORM11 by stacking each contextual representation INLINEFORM12 obtaining a matrix INLINEFORM13 , where INLINEFORM14 ."
],
[
"This phase uncovers a possible inference chain which models meaningful relationships between the query and the set of related documents. The inference chain is obtained by performing, for each inference step INLINEFORM0 , the attention mechanisms given by the query attentive read and the document attentive read keeping a state of the inference process given by an additional recurrent neural network with GRU units. In this way, the network is able to progressively refine the attention weights focusing on the most relevant tokens of the query and the documents which are exploited by the prediction neural network to select the correct answers among the candidate ones.",
"Given the contextual representations for the query words INLINEFORM0 and the inference GRU state INLINEFORM1 , we obtain a refined query representation INLINEFORM2 (query glimpse) by performing an attention mechanism over the query at inference step INLINEFORM3 : INLINEFORM4 ",
"where INLINEFORM0 are the attention weights associated to the query words, INLINEFORM1 and INLINEFORM2 are respectively a weight matrix and a bias vector which are used to perform the bilinear product with the query token representations INLINEFORM3 . The attention weights can be interpreted as the relevance scores for each word of the query dependent on the inference state INLINEFORM4 at the current inference step INLINEFORM5 .",
"Given the query glimpse INLINEFORM0 and the inference GRU state INLINEFORM1 , we perform an attention mechanism over the contextual representations for the words of the stacked documents INLINEFORM2 : INLINEFORM3 ",
"where INLINEFORM0 is the INLINEFORM1 -th row of INLINEFORM2 , INLINEFORM3 are the attention weights associated to the document words, INLINEFORM4 and INLINEFORM5 are respectively a weight matrix and a bias vector which are used to perform the bilinear product with the document token representations INLINEFORM6 . The attention weights can be interpreted as the relevance scores for each word of the documents conditioned on both the query glimpse and the inference state INLINEFORM7 at the current inference step INLINEFORM8 . By combining the set of relevant documents in INLINEFORM9 , we obtain the probability distribution ( INLINEFORM10 ) over all the relevant document tokens using the above-mentioned attention mechanism.",
"The inference GRU state at the inference step INLINEFORM0 is updated according to INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 are the results of a gating mechanism obtained by evaluating INLINEFORM4 for the query and the documents, respectively. The gating function INLINEFORM5 is defined as a 2-layer feed-forward neural network with a Rectified Linear Unit (ReLU) BIBREF5 activation function in the hidden layer and a sigmoid activation function in the output layer. The purpose of the gating mechanism is to retain useful information for the inference process about query and documents and forget useless one."
],
[
"The prediction phase, which is completely different from the pointer-sum loss reported in BIBREF3 , is able to generate, given the query INLINEFORM0 , a relevance score for each candidate answer INLINEFORM1 by using the document attention weights INLINEFORM2 computed in the last inference step INLINEFORM3 . The relevance score of each word INLINEFORM4 is obtained by summing the attention weights of INLINEFORM5 in each document related to INLINEFORM6 . Formally the relevance score for a given word INLINEFORM7 is defined as: INLINEFORM8 ",
"where INLINEFORM0 returns 0 if INLINEFORM1 , INLINEFORM2 otherwise; INLINEFORM3 returns the word in position INLINEFORM4 of the stacked documents matrix INLINEFORM5 and INLINEFORM6 returns the frequency of the word INLINEFORM7 in the documents INLINEFORM8 related to the query INLINEFORM9 . The relevance score takes into account the importance of token occurrences in the considered documents given by the computed attention weights. Moreover, the normalization term INLINEFORM10 is applied to the relevance score in order to mitigate the weight associated to highly frequent tokens.",
"The evaluated relevance scores are concatenated in a single vector representation INLINEFORM0 which is given in input to the answer prediction neural network defined as: INLINEFORM1 ",
"where INLINEFORM0 is the hidden layer size, INLINEFORM1 and INLINEFORM2 are weight matrices, INLINEFORM3 , INLINEFORM4 are bias vectors, INLINEFORM5 is the sigmoid function and INLINEFORM6 is the ReLU activation function, which are applied pointwise to the given input vector.",
"The neural network weights are supposed to learn latent features which encode relationships between the most relevant words for the given query to predict the correct answers. The outer sigmoid activation function is used to treat the problem as a multi-label classification problem, so that each candidate answer is independent and not mutually exclusive. In this way the neural network generates a score which represents the probability that the candidate answer is correct. Moreover, differently from BIBREF3 , the candidate answer INLINEFORM0 can be any word, even those which not belong to the documents related to the query.",
"The model is trained by minimizing the binary cross-entropy loss function comparing the neural network output INLINEFORM0 with the target answers for the given query INLINEFORM1 represented as a binary vector, in which there is a 1 in the corresponding position of the correct answer, 0 otherwise."
],
[
"The model performance is evaluated on the QA and Recs tasks of the bAbI Movie Dialog dataset using HITS@k evaluation metric, which is equal to the number of correct answers in the top- INLINEFORM0 results. In particular, the performance for the QA task is evaluated according to HITS@1, while the performance for the Recs task is evaluated according to HITS@100.",
"Differently from BIBREF2 , the relevant knowledge base facts, taken from the knowledge base in triple form distributed with the dataset, are retrieved by INLINEFORM0 implemented by exploiting the Elasticsearch engine and not according to an hash lookup operator which applies a strict filtering procedure based on word frequency. In our work, INLINEFORM1 returns at most the top 30 relevant facts for INLINEFORM2 . Each entity in questions and documents is recognized using the list of entities provided with the dataset and considered as a single word of the dictionary INLINEFORM3 .",
"Questions, answers and documents given in input to the model are preprocessed using the NLTK toolkit BIBREF6 performing only word tokenization. The question given in input to the INLINEFORM0 operator is preprocessed performing word tokenization and stopword removal.",
"The optimization method and tricks are adopted from BIBREF3 . The model is trained using ADAM BIBREF7 optimizer (learning rate= INLINEFORM0 ) with a batch size of 128 for at most 100 epochs considering the best model until the HITS@k on the validation set decreases for 5 consecutive times. Dropout BIBREF8 is applied on INLINEFORM1 and on INLINEFORM2 with a rate of INLINEFORM3 and on the prediction neural network hidden layer with a rate of INLINEFORM4 . L2 regularization is applied to the embedding matrix INLINEFORM5 with a coefficient equal to INLINEFORM6 . We clipped the gradients if their norm is greater than 5 to stabilize learning BIBREF9 . Embedding size INLINEFORM7 is fixed to 50. All GRU output sizes are fixed to 128. The number of inference steps INLINEFORM8 is set to 3. The size of the prediction neural network hidden layer INLINEFORM9 is fixed to 4096. Biases INLINEFORM10 and INLINEFORM11 are initialized to zero vectors. All weight matrices are initialized sampling from the normal distribution INLINEFORM12 . The ReLU activation function in the prediction neural network has been experimentally chosen comparing different activation functions such as sigmoid and tanh and taking the one which leads to the best performance. The model is implemented in TensorFlow BIBREF10 and executed on an NVIDIA TITAN X GPU.",
"Following the experimental design, the results in Table TABREF10 are promising because our model outperforms all other systems on both tasks except for the QA SYSTEM on the QA task. Despite the advantage of the QA SYSTEM, it is a carefully designed system to handle knowledge base data in the form of triples, but our model can leverage data in the form of documents, without making any assumption about the form of the input data and can be applied to different kind of tasks. Additionally, the model MEMN2N is a neural network whose weights are pre-trained on the same dataset without using the long-term memory and the models JOINT SUPERVISED EMBEDDINGS and JOINT MEMN2N are models trained across all the tasks of the dataset in order to boost performance. Despite that, our model outperforms the three above-mentioned ones without using any supplementary trick. Even though our model performance is higher than all the others on the Recs task, we believe that the obtained result may be improved and so we plan a further investigation. Moreover, the need for further investigation can be justified by the work reported in BIBREF11 which describes some issues regarding the Recs task.",
"Figure FIGREF11 shows the attention weights computed in the last inference step of the iterative attention mechanism used by the model to answer to a given question. Attention weights, represented as red boxes with variable color shades around the tokens, can be used to interpret the reasoning mechanism applied by the model because higher shades of red are associated to more relevant tokens on which the model focus its attention. It is worth to notice that the attention weights associated to each token are the result of the inference mechanism uncovered by the model which progressively tries to focus on the relevant aspects of the query and the documents which are exploited to generate the answers.",
"Given the question “what does Larenz Tate act in?” shown in the above-mentioned figure, the model is able to understand that “Larenz Tate” is the subject of the question and “act in” represents the intent of the question. Reading the related documents, the model associates higher attention weights to the most relevant tokens needed to answer the question, such as “The Postman”, “A Man Apart” and so on."
],
[
"We think that it is necessary to consider models and techniques coming from research both in QA and recommender systems in order to pursue our desire to build an intelligent agent able to assist the user in decision-making tasks. We cannot fill the gap between the above-mentioned research areas if we do not consider the proposed models in a synergic way by virtue of the proposed analogy between the user profile (the set of user preferences) and the items to be recommended, as the question and the correct answers. The first work which goes in this direction is reported in BIBREF12 , which exploits movie descriptions to suggest appealing movies for a given user using an architecture tipically used for QA tasks. In fact, most of the research in the recommender systems field presents ad-hoc systems which exploit neighbourhood information like in Collaborative Filtering techniques BIBREF13 , item descriptions and metadata like in Content-based systems BIBREF14 . Recently presented neural network models BIBREF15 , BIBREF16 systems are able to learn latent representations in the network weights leveraging information coming from user preferences and item information.",
"In recent days, a lot of effort is devoted to create benchmarks for artificial agents to assess their ability to comprehend natural language and to reason over facts. One of the first attempt is the bAbI BIBREF17 dataset which is a synthetic dataset containing elementary tasks such as selecting an answer between one or more candidate facts, answering yes/no questions, counting operations over lists and sets and basic induction and deduction tasks. Another relevant benchmark is the one described in BIBREF18 , which provides CNN/Daily Mail datasets consisting of document-query-answer triples where an entity in the query is replaced by a placeholder and the system should identify the correct entity by reading and comprehending the given document. MCTest BIBREF19 requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Finally, SQuAD BIBREF20 consists in a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage.",
"According to the experimental evaluations conducted on the above-mentioned datasets, high-level performance can be obtained exploiting complex attention mechanisms which are able to focus on relevant evidences in the processed content. One of the earlier approaches used to solve these tasks is given by the general Memory Network BIBREF21 , BIBREF22 framework which is one of the first neural network models able to access external memories to extract relevant information through an attention mechanism and to use them to provide the correct answer. A deep Recurrent Neural Network with Long Short-Term Memory units is presented in BIBREF18 , which solves CNN/Daily Mail datasets by designing two different attention mechanisms called Impatient Reader and Attentive Reader. Another way to incorporate attention in neural network models is proposed in BIBREF23 which defines a pointer-sum loss whose aim is to maximize the attention weights which lead to the correct answer."
],
[
"In this work we propose a novel model based on Artificial Neural Networks to answer questions with multiple answers by exploiting multiple facts retrieved from a knowledge base. The proposed model can be considered a relevant building block of a conversational recommender system. Differently from BIBREF3 , our model can consider multiple documents as a source of information in order to generate multiple answers which may not belong to the documents. As presented in this work, common tasks such as QA and top-n recommendation can be solved effectively by our model.",
"In a common recommendation system scenario, when a user enters a search query, it is assumed that his preferences are known. This is a stringent requirement because users cannot have a clear idea of their preferences at that point. Conversational recommender systems support users to fulfill their information needs through an interactive process. In this way, the system can provide a personalized experience dynamically adapting the user model with the possibility to enhance the generated predictions. Moreover, the system capability can be further enhanced giving explanations to the user about the given suggestions.",
"To reach our goal, we should improve our model by designing a INLINEFORM0 operator able to return relevant facts recognizing the most relevant information in the query, by exploiting user preferences and contextual information to learn the user model and by providing a mechanism which leverages attention weights to give explanations. In order to effectively train our model, we plan to collect real dialog data containing contextual information associated to each user and feedback for each dialog which represents if the user is satisfied with the conversation. Given these enhancements, we should design a system able to hold effectively a dialog with the user recognizing his intent and providing him the most suitable contents.",
"With this work we try to show the effectiveness of our architecture for tasks which go from pure question answering to top-n recommendation through an experimental evaluation without any assumption on the task to be solved. To do that, we do not use any hand-crafted linguistic features but we let the system learn and leverage them in the inference process which leads to the answers through multiple reasoning steps. During these steps, the system understands relevant relationships between question and documents without relying on canonical matching, but repeating an attention mechanism able to unconver related aspects in distributed representations, conditioned on an encoding of the inference process given by another neural network. Equipping agents with a reasoning mechanism like the one described in this work and exploiting the ability of neural network models to learn from data, we may be able to create truly intelligent agents."
],
[
"This work is supported by the IBM Faculty Award \"Deep Learning to boost Cognitive Question Answering\". The Titan X GPU used for this research was donated by the NVIDIA Corporation."
]
],
"section_name": [
"Motivation and Background",
"Methodology",
"Encoding phase",
"Inference phase",
"Prediction phase",
"Experimental evaluation",
"Related work",
"Conclusions and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"4354161cb1762b3546fac80de9a189e1913067bc",
"749e74826ef1d56e7d1c27871e83c6c3d746790c"
],
"answer": [
{
"evidence": [
"The model performance is evaluated on the QA and Recs tasks of the bAbI Movie Dialog dataset using HITS@k evaluation metric, which is equal to the number of correct answers in the top- INLINEFORM0 results. In particular, the performance for the QA task is evaluated according to HITS@1, while the performance for the Recs task is evaluated according to HITS@100.",
"Following the experimental design, the results in Table TABREF10 are promising because our model outperforms all other systems on both tasks except for the QA SYSTEM on the QA task. Despite the advantage of the QA SYSTEM, it is a carefully designed system to handle knowledge base data in the form of triples, but our model can leverage data in the form of documents, without making any assumption about the form of the input data and can be applied to different kind of tasks. Additionally, the model MEMN2N is a neural network whose weights are pre-trained on the same dataset without using the long-term memory and the models JOINT SUPERVISED EMBEDDINGS and JOINT MEMN2N are models trained across all the tasks of the dataset in order to boost performance. Despite that, our model outperforms the three above-mentioned ones without using any supplementary trick. Even though our model performance is higher than all the others on the Recs task, we believe that the obtained result may be improved and so we plan a further investigation. Moreover, the need for further investigation can be justified by the work reported in BIBREF11 which describes some issues regarding the Recs task.",
"FLOAT SELECTED: Table 1: Comparison between our model and baselines from [6] on the QA and Recs tasks evaluated according to HITS@1 and HITS@100, respectively."
],
"extractive_spans": [],
"free_form_answer": "Their model achieves 30.0 HITS@100 on the recommendation task, more than any other baseline",
"highlighted_evidence": [
"In particular, the performance for the QA task is evaluated according to HITS@1, while the performance for the Recs task is evaluated according to HITS@100.",
"Following the experimental design, the results in Table TABREF10 are promising because our model outperforms all other systems on both tasks except for the QA SYSTEM on the QA task.",
"FLOAT SELECTED: Table 1: Comparison between our model and baselines from [6] on the QA and Recs tasks evaluated according to HITS@1 and HITS@100, respectively."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Following the experimental design, the results in Table TABREF10 are promising because our model outperforms all other systems on both tasks except for the QA SYSTEM on the QA task. Despite the advantage of the QA SYSTEM, it is a carefully designed system to handle knowledge base data in the form of triples, but our model can leverage data in the form of documents, without making any assumption about the form of the input data and can be applied to different kind of tasks. Additionally, the model MEMN2N is a neural network whose weights are pre-trained on the same dataset without using the long-term memory and the models JOINT SUPERVISED EMBEDDINGS and JOINT MEMN2N are models trained across all the tasks of the dataset in order to boost performance. Despite that, our model outperforms the three above-mentioned ones without using any supplementary trick. Even though our model performance is higher than all the others on the Recs task, we believe that the obtained result may be improved and so we plan a further investigation. Moreover, the need for further investigation can be justified by the work reported in BIBREF11 which describes some issues regarding the Recs task.",
"FLOAT SELECTED: Table 1: Comparison between our model and baselines from [6] on the QA and Recs tasks evaluated according to HITS@1 and HITS@100, respectively."
],
"extractive_spans": [],
"free_form_answer": "Proposed model achieves HITS@100 of 30.0 compared to best baseline model result of 29.2 on recommendation task.",
"highlighted_evidence": [
"Following the experimental design, the results in Table TABREF10 are promising because our model outperforms all other systems on both tasks except for the QA SYSTEM on the QA task.",
"Even though our model performance is higher than all the others on the Recs task, we believe that the obtained result may be improved and so we plan a further investigation.",
"FLOAT SELECTED: Table 1: Comparison between our model and baselines from [6] on the QA and Recs tasks evaluated according to HITS@1 and HITS@100, respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"00cd0c65b56362601539b021988bcc295a79ee7d",
"ddbb6f13ca31e34bd8936af6e62ae4e3582fb89e"
],
"answer": [
{
"evidence": [
"The model performance is evaluated on the QA and Recs tasks of the bAbI Movie Dialog dataset using HITS@k evaluation metric, which is equal to the number of correct answers in the top- INLINEFORM0 results. In particular, the performance for the QA task is evaluated according to HITS@1, while the performance for the Recs task is evaluated according to HITS@100.",
"Differently from BIBREF2 , the relevant knowledge base facts, taken from the knowledge base in triple form distributed with the dataset, are retrieved by INLINEFORM0 implemented by exploiting the Elasticsearch engine and not according to an hash lookup operator which applies a strict filtering procedure based on word frequency. In our work, INLINEFORM1 returns at most the top 30 relevant facts for INLINEFORM2 . Each entity in questions and documents is recognized using the list of entities provided with the dataset and considered as a single word of the dictionary INLINEFORM3 ."
],
"extractive_spans": [
"bAbI Movie Dialog dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"The model performance is evaluated on the QA and Recs tasks of the bAbI Movie Dialog dataset using HITS@k evaluation metric, which is equal to the number of correct answers in the top- INLINEFORM0 results.",
"Differently from BIBREF2 , the relevant knowledge base facts, taken from the knowledge base in triple form distributed with the dataset, are retrieved by INLINEFORM0 implemented by exploiting the Elasticsearch engine and not according to an hash lookup operator which applies a strict filtering procedure based on word frequency."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"b329a399ced25f53969117d06f98ae60f47e99f7",
"e861d5d01ad596b46ba66bd0ddbbcc19351b88f5"
],
"answer": [
{
"evidence": [
"The sequences of dense representations for INLINEFORM0 and INLINEFORM1 are encoded using a bidirectional recurrent neural network encoder with Gated Recurrent Units (GRU) as in BIBREF3 which represents each word INLINEFORM2 as the concatenation of a forward encoding INLINEFORM3 and a backward encoding INLINEFORM4 . From now on, we denote the contextual representation for the word INLINEFORM5 by INLINEFORM6 and the contextual representation for the word INLINEFORM7 in the document INLINEFORM8 by INLINEFORM9 . Differently from BIBREF3 , we build a unique representation for the whole set of documents INLINEFORM10 related to the query INLINEFORM11 by stacking each contextual representation INLINEFORM12 obtaining a matrix INLINEFORM13 , where INLINEFORM14 .",
"This phase uncovers a possible inference chain which models meaningful relationships between the query and the set of related documents. The inference chain is obtained by performing, for each inference step INLINEFORM0 , the attention mechanisms given by the query attentive read and the document attentive read keeping a state of the inference process given by an additional recurrent neural network with GRU units. In this way, the network is able to progressively refine the attention weights focusing on the most relevant tokens of the query and the documents which are exploited by the prediction neural network to select the correct answers among the candidate ones."
],
"extractive_spans": [
"bidirectional recurrent neural network encoder with Gated Recurrent Units (GRU)",
"additional recurrent neural network with GRU units"
],
"free_form_answer": "",
"highlighted_evidence": [
"The sequences of dense representations for INLINEFORM0 and INLINEFORM1 are encoded using a bidirectional recurrent neural network encoder with Gated Recurrent Units (GRU) as in BIBREF3 which represents each word INLINEFORM2 as the concatenation of a forward encoding INLINEFORM3 and a backward encoding INLINEFORM4 .",
"The inference chain is obtained by performing, for each inference step INLINEFORM0 , the attention mechanisms given by the query attentive read and the document attentive read keeping a state of the inference process given by an additional recurrent neural network with GRU units."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The sequences of dense representations for INLINEFORM0 and INLINEFORM1 are encoded using a bidirectional recurrent neural network encoder with Gated Recurrent Units (GRU) as in BIBREF3 which represents each word INLINEFORM2 as the concatenation of a forward encoding INLINEFORM3 and a backward encoding INLINEFORM4 . From now on, we denote the contextual representation for the word INLINEFORM5 by INLINEFORM6 and the contextual representation for the word INLINEFORM7 in the document INLINEFORM8 by INLINEFORM9 . Differently from BIBREF3 , we build a unique representation for the whole set of documents INLINEFORM10 related to the query INLINEFORM11 by stacking each contextual representation INLINEFORM12 obtaining a matrix INLINEFORM13 , where INLINEFORM14 .",
"This phase uncovers a possible inference chain which models meaningful relationships between the query and the set of related documents. The inference chain is obtained by performing, for each inference step INLINEFORM0 , the attention mechanisms given by the query attentive read and the document attentive read keeping a state of the inference process given by an additional recurrent neural network with GRU units. In this way, the network is able to progressively refine the attention weights focusing on the most relevant tokens of the query and the documents which are exploited by the prediction neural network to select the correct answers among the candidate ones."
],
"extractive_spans": [
"Gated Recurrent Units"
],
"free_form_answer": "",
"highlighted_evidence": [
"The sequences of dense representations for INLINEFORM0 and INLINEFORM1 are encoded using a bidirectional recurrent neural network encoder with Gated Recurrent Units (GRU) as in BIBREF3 which represents each word INLINEFORM2 as the concatenation of a forward encoding INLINEFORM3 and a backward encoding INLINEFORM4 . ",
" The inference chain is obtained by performing, for each inference step INLINEFORM0 , the attention mechanisms given by the query attentive read and the document attentive read keeping a state of the inference process given by an additional recurrent neural network with GRU units."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How well does their model perform on the recommendation task?",
"Which knowledge base do they use to retrieve facts?",
"Which neural network architecture do they use?"
],
"question_id": [
"97ff88c31dac9a3e8041a77fa7e34ce54eef5a76",
"272defe245d1c5c091d3bc51399181da2da5e5f0",
"860257956b83099cccf1359e5d960289d7d50265"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Comparison between our model and baselines from [6] on the QA and Recs tasks evaluated according to HITS@1 and HITS@100, respectively.",
"Figure 1: Attention weights q̃i and D̃qi computed by the neural network attention mechanisms at the last inference step T for each token. Higher shades correspond to higher relevance scores for the related tokens."
],
"file": [
"5-Table1-1.png",
"6-Figure1-1.png"
]
} | [
"How well does their model perform on the recommendation task?"
] | [
[
"1702.02367-Experimental evaluation-0",
"1702.02367-Experimental evaluation-4",
"1702.02367-5-Table1-1.png"
]
] | [
"Proposed model achieves HITS@100 of 30.0 compared to best baseline model result of 29.2 on recommendation task."
] | 210 |
1607.07514 | Tweet2Vec: Learning Tweet Embeddings Using Character-level CNN-LSTM Encoder-Decoder | We present Tweet2Vec, a novel method for generating general-purpose vector representation of tweets. The model learns tweet embeddings using character-level CNN-LSTM encoder-decoder. We trained our model on 3 million, randomly selected English-language tweets. The model was evaluated using two methods: tweet semantic similarity and tweet sentiment categorization, outperforming the previous state-of-the-art in both tasks. The evaluations demonstrate the power of the tweet embeddings generated by our model for various tweet categorization tasks. The vector representations generated by our model are generic, and hence can be applied to a variety of tasks. Though the model presented in this paper is trained on English-language tweets, the method presented can be used to learn tweet embeddings for different languages. | {
"paragraphs": [
[
"In recent years, the micro-blogging site Twitter has become a major social media platform with hundreds of millions of users. The short (140 character limit), noisy and idiosyncratic nature of tweets make standard information retrieval and data mining methods ill-suited to Twitter. Consequently, there has been an ever growing body of IR and data mining literature focusing on Twitter. However, most of these works employ extensive feature engineering to create task-specific, hand-crafted features. This is time consuming and inefficient as new features need to be engineered for every task.",
"In this paper, we present Tweet2Vec, a method for generating general-purpose vector representation of tweets that can be used for any classification task. Tweet2Vec removes the need for expansive feature engineering and can be used to train any standard off-the-shelf classifier (e.g., logistic regression, svm, etc). Tweet2Vec uses a CNN-LSTM encoder-decoder model that operates at the character level to learn and generate vector representation of tweets. Our method is especially useful for natural language processing tasks on Twitter where it is particularly difficult to engineer features, such as speech-act classification and stance detection (as shown in our previous works on these topics BIBREF0 , BIBREF1 ).",
"There has been several works on generating embeddings for words, most famously Word2Vec by Mikolov et al. BIBREF2 ). There has also been a number of different works that use encoder-decoder models based on long short-term memory (LSTM) BIBREF3 , and gated recurrent neural networks (GRU) BIBREF4 . These methods have been used mostly in the context of machine translation. The encoder maps the sentence from the source language to a vector representation, while the decoder conditions on this encoded vector for translating it to the target language. Perhaps the work most related to ours is the work of Le and Mikolov le2014distributed, where they extended the Word2Vec model to generate representations for sentences (called ParagraphVec). However, these models all function at the word level, making them ill-suited to the extremely noisy and idiosyncratic nature of tweets. Our character-level model, on the other hand, can better deal with the noise and idiosyncrasies in tweets. We plan to make our model and the data used to train it publicly available to be used by other researchers that work with tweets."
],
[
"In this section, we describe the CNN-LSTM encoder-decoder model that operates at the character level and generates vector representation of tweets. The encoder consists of convolutional layers to extract features from the characters and an LSTM layer to encode the sequence of features to a vector representation, while the decoder consists of two LSTM layers which predict the character at each time step from the output of encoder."
],
[
"Character-level CNN (CharCNN) is a slight variant of the deep character-level convolutional neural network introduced by Zhang et al BIBREF5 . In this model, we perform temporal convolutional and temporal max-pooling operations, which computes one-dimensional convolution and pooling functions, respectively, between input and output. Given a discrete input function INLINEFORM0 , a discrete kernel function INLINEFORM1 and stride INLINEFORM2 , the convolution INLINEFORM3 between INLINEFORM4 and INLINEFORM5 and pooling operation INLINEFORM6 of INLINEFORM7 is calculated as: DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 is an offset constant.",
"We adapted this model, which employs temporal convolution and pooling operations, for tweets. The character set includes the English alphabets, numbers, special characters and unknown character. There are 70 characters in total, given below:",
"abcdefghijklmnopqrstuvwxyz0123456789",
"-,;.!?:'\"/\\|_#$%&^*~`+-=<>()[]{}",
"Each character in the tweets can be encoded using one-hot vector INLINEFORM0 . Hence, the tweets are represented as a binary matrix INLINEFORM1 with padding wherever necessary, where 150 is the maximum number of characters in a tweet (140 tweet characters and padding) and 70 is the size of the character set.",
"Each tweet, in the form of a matrix, is now fed into a deep model consisting of four 1-d convolutional layers. A convolution operation employs a filter INLINEFORM0 , to extract n-gram character feature from a sliding window of INLINEFORM1 characters at the first layer and learns abstract textual features in the subsequent layers. The convolution in the first layer operates on sliding windows of character (size INLINEFORM2 ), and the convolutions in deeper layers are defined in a similar way. Generally, for tweet INLINEFORM3 , a feature INLINEFORM4 at layer INLINEFORM5 is generated by: DISPLAYFORM0 ",
"where INLINEFORM0 , INLINEFORM1 is the bias at layer INLINEFORM2 and INLINEFORM3 is a rectified linear unit.",
"This filter INLINEFORM0 is applied across all possible windows of characters in the tweet to produce a feature map. The output of the convolutional layer is followed by a 1-d max-overtime pooling operation BIBREF6 over the feature map and selects the maximum value as the prominent feature from the current filter. In this way, we apply INLINEFORM1 filters at each layer. Pooling size may vary at each layer (given by INLINEFORM2 at layer INLINEFORM3 ). The pooling operation shrinks the size of the feature representation and filters out trivial features like unnecessary combination of characters. The window length INLINEFORM4 , number of filters INLINEFORM5 , pooling size INLINEFORM6 at each layer are given in Table TABREF6 .",
".93",
".91",
"We define INLINEFORM0 to denote the character-level CNN operation on input tweet matrix INLINEFORM1 . The output from the last convolutional layer of CharCNN(T) (size: INLINEFORM2 ) is subsequently given as input to the LSTM layer. Since LSTM works on sequences (explained in Section SECREF8 and SECREF11 ), pooling operation is restricted to the first two layers of the model (as shown in Table TABREF6 )."
],
[
"In this section we briefly describe the LSTM model BIBREF7 . Given an input sequence INLINEFORM0 ( INLINEFORM1 ), LSTM computes the hidden vector sequence INLINEFORM2 ( INLINEFORM3 ) and and output vector sequence INLINEFORM4 ( INLINEFORM5 ). At each time step, the output of the module is controlled by a set of gates as a function of the previous hidden state INLINEFORM6 and the input at the current time step INLINEFORM7 , the forget gate INLINEFORM8 , the input gate INLINEFORM9 , and the output gate INLINEFORM10 . These gates collectively decide the transitions of the current memory cell INLINEFORM11 and the current hidden state INLINEFORM12 . The LSTM transition functions are defined as follows: DISPLAYFORM0 ",
"Here, INLINEFORM0 is the INLINEFORM1 function that has an output in [0, 1], INLINEFORM2 denotes the hyperbolic tangent function that has an output in INLINEFORM3 , and INLINEFORM4 denotes the component-wise multiplication. The extent to which the information in the old memory cell is discarded is controlled by INLINEFORM5 , while INLINEFORM6 controls the extent to which new information is stored in the current memory cell, and INLINEFORM7 is the output based on the memory cell INLINEFORM8 . LSTM is explicitly designed for learning long-term dependencies, and therefore we choose LSTM after the convolution layer to learn dependencies in the sequence of extracted features. In sequence-to-sequence generation tasks, an LSTM defines a distribution over outputs and sequentially predicts tokens using a softmax function. DISPLAYFORM0 ",
"where INLINEFORM0 is the activation function. For simplicity, we define INLINEFORM1 to denote the LSTM operation on input INLINEFORM2 at time-step INLINEFORM3 and the previous hidden state INLINEFORM4 ."
],
[
"The CNN-LSTM encoder-decoder model draws on the intuition that the sequence of features (e.g. character and word n-grams) extracted from CNN can be encoded into a vector representation using LSTM that can embed the meaning of the whole tweet. Figure FIGREF7 illustrates the complete encoder-decoder model. The input and output to the model are the tweet represented as a matrix where each row is the one-hot vector representation of the characters. The procedure for encoding and decoding is explained in the following section.",
"Given a tweet in the matrix form T (size: INLINEFORM0 ), the CNN (Section SECREF2 ) extracts the features from the character representation. The one-dimensional convolution involves a filter vector sliding over a sequence and detecting features at different positions. The new successive higher-order window representations then are fed into LSTM (Section SECREF8 ). Since LSTM extracts representation from sequence input, we will not apply pooling after convolution at the higher layers of Character-level CNN model. The encoding procedure can be summarized as: DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 is an extracted feature matrix where each row can be considered as a time-step for the LSTM and INLINEFORM1 is the hidden representation at time-step INLINEFORM2 . LSTM operates on each row of the INLINEFORM3 along with the hidden vectors from previous time-step to produce embedding for the subsequent time-steps. The vector output at the final time-step, INLINEFORM4 , is used to represent the entire tweet. In our case, the size of the INLINEFORM5 is 256.",
"The decoder operates on the encoded representation with two layers of LSTMs. In the initial time-step, the end-to-end output from the encoding procedure is used as the original input into first LSTM layer. The last LSTM decoder generates each character, INLINEFORM0 , sequentially and combines it with previously generated hidden vectors of size 128, INLINEFORM1 , for the next time-step prediction. The prediction of character at each time step is given by: DISPLAYFORM0 ",
"where INLINEFORM0 refers to the character at time-step INLINEFORM1 , INLINEFORM2 represents the one-hot vector of the character at time-step INLINEFORM3 . The result from the softmax is a decoded tweet matrix INLINEFORM4 , which is eventually compared with the actual tweet or a synonym-replaced version of the tweet (explained in Section SECREF3 ) for learning the parameters of the model."
],
[
"We trained the CNN-LSTM encoder-decoder model on 3 million randomly selected English-language tweets populated using data augmentation techniques, which are useful for controlling generalization error for deep learning models. Data augmentation, in our context, refers to replicating tweet and replacing some of the words in the replicated tweets with their synonyms. These synonyms are obtained from WordNet BIBREF8 which contains words grouped together on the basis of their meanings. This involves selection of replaceable words (example of non-replaceable words are stopwords, user names, hash tags, etc) from the tweet and the number of words INLINEFORM0 to be replaced. The probability of the number, INLINEFORM1 , is given by a geometric distribution with parameter INLINEFORM2 in which INLINEFORM3 . Words generally have several synonyms, thus the synonym index INLINEFORM4 , of a given word is also determined by another geometric distribution in which INLINEFORM5 . In our encoder-decoder model, we decode the encoded representation to the actual tweet or a synonym-replaced version of the tweet from the augmented data. We used INLINEFORM6 , INLINEFORM7 for our training. We also make sure that the POS tags of the replaced words are not completely different from the actual words. For regularization, we apply a dropout mechanism after the penultimate layer. This prevents co-adaptation of hidden units by randomly setting a proportion INLINEFORM8 of the hidden units to zero (for our case, we set INLINEFORM9 ).",
"To learn the model parameters, we minimize the cross-entropy loss as the training objective using the Adam Optimization algorithm BIBREF9 . It is given by DISPLAYFORM0 ",
"where p is the true distribution (one-hot vector representing characters in the tweet) and q is the output of the softmax. This, in turn, corresponds to computing the negative log-probability of the true class."
],
[
"We evaluated our model using two classification tasks: Tweet semantic relatedness and Tweet sentiment classification."
],
[
"The first evaluation is based on the SemEval 2015-Task 1: Paraphrase and Semantic Similarity in Twitter BIBREF10 . Given a pair of tweets, the goal is to predict their semantic equivalence (i.e., if they express the same or very similar meaning), through a binary yes/no judgement. The dataset provided for this task contains 18K tweet pairs for training and 1K pairs for testing, with INLINEFORM0 of these pairs being paraphrases, and INLINEFORM1 non-paraphrases. We first extract the vector representation of all the tweets in the dataset using our Tweet2Vec model. We use two features to represent a tweet pair. Given two tweet vectors INLINEFORM2 and INLINEFORM3 , we compute their element-wise product INLINEFORM4 and their absolute difference INLINEFORM5 and concatenate them together (Similar to BIBREF11 ). We then train a logistic regression model on these features using the dataset. Cross-validation is used for tuning the threshold for classification. In contrast to our model, most of the methods used for this task were largely based on extensive use of feature engineering, or a combination of feature engineering with semantic spaces. Table 2 shows the performance of our model compared to the top four models in the SemEval 2015 competition, and also a model that was trained using ParagraphVec. Our model (Tweet2Vec) outperforms all these models, without resorting to extensive task-specific feature engineering.",
".93",
".91"
],
[
"The second evaluation is based on the SemEval 2015-Task 10B: Twitter Message Polarity Classification BIBREF12 . Given a tweet, the task is to classify it as either positive, negative or neutral in sentiment. The size of the training and test sets were 9,520 tweets and 2,380 tweets respectively ( INLINEFORM0 positive, INLINEFORM1 negative, and INLINEFORM2 neutral).",
"As with the last task, we first extract the vector representation of all the tweets in the dataset using Tweet2Vec and use that to train a logistic regression classifier using the vector representations. Even though there are three classes, the SemEval task is a binary task. The performance is measured as the average F1-score of the positive and the negative class. Table 3 shows the performance of our model compared to the top four models in the SemEval 2015 competition (note that only the F1-score is reported by SemEval for this task) and ParagraphVec. Our model outperforms all these models, again without resorting to any feature engineering.",
".93",
".91",
""
],
[
"In this paper, we presented Tweet2Vec, a novel method for generating general-purpose vector representation of tweets, using a character-level CNN-LSTM encoder-decoder architecture. To the best of our knowledge, ours is the first attempt at learning and applying character-level tweet embeddings. Our character-level model can deal with the noisy and peculiar nature of tweets better than methods that generate embeddings at the word level. Our model is also robust to synonyms with the help of our data augmentation technique using WordNet.",
"The vector representations generated by our model are generic, and thus can be applied to tasks of different nature. We evaluated our model using two different SemEval 2015 tasks: Twitter semantic relatedness, and sentiment classification. Simple, off-the-shelf logistic regression classifiers trained using the vector representations generated by our model outperformed the top-performing methods for both tasks, without the need for any extensive feature engineering. This was despite the fact that due to resource limitations, our Tweet2Vec model was trained on a relatively small set (3 million tweets). Also, our method outperformed ParagraphVec, which is an extension of Word2Vec to handle sentences. This is a small but noteworthy illustration of why our tweet embeddings are best-suited to deal with the noise and idiosyncrasies of tweets.",
"For future work, we plan to extend the method to include: 1) Augmentation of data through reordering the words in the tweets to make the model robust to word-order, 2) Exploiting attention mechanism BIBREF13 in our model to improve alignment of words in tweets during decoding, which could improve the overall performance."
]
],
"section_name": [
"Introduction",
"CNN-LSTM Encoder-Decoder",
"Character-Level CNN Tweet Model",
"Long-Short Term Memory (LSTM)",
"The Combined Model",
"Data Augmentation & Training",
"Experiments",
"Semantic Relatedness",
"Sentiment Classification",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"162cbc100d96539e18f3dd8a445837bfdf6d2c4f",
"ce45d91a6a847368a4e18163bcc58acd77be85ad"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0121c6efdd14d05810c7033b5e497ba6e6965d73",
"7be7e3c9071f75e3e3ec97bb5f981b27ac354578"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Results of the paraphrase and semantic similarity in Twitter task.",
"FLOAT SELECTED: Table 3: Results of Twitter sentiment classification task."
],
"extractive_spans": [],
"free_form_answer": "Sentiment classification task by 0,008 F1, and semantic similarity task by 0,003 F1.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Results of the paraphrase and semantic similarity in Twitter task.",
"FLOAT SELECTED: Table 3: Results of Twitter sentiment classification task."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first evaluation is based on the SemEval 2015-Task 1: Paraphrase and Semantic Similarity in Twitter BIBREF10 . Given a pair of tweets, the goal is to predict their semantic equivalence (i.e., if they express the same or very similar meaning), through a binary yes/no judgement. The dataset provided for this task contains 18K tweet pairs for training and 1K pairs for testing, with INLINEFORM0 of these pairs being paraphrases, and INLINEFORM1 non-paraphrases. We first extract the vector representation of all the tweets in the dataset using our Tweet2Vec model. We use two features to represent a tweet pair. Given two tweet vectors INLINEFORM2 and INLINEFORM3 , we compute their element-wise product INLINEFORM4 and their absolute difference INLINEFORM5 and concatenate them together (Similar to BIBREF11 ). We then train a logistic regression model on these features using the dataset. Cross-validation is used for tuning the threshold for classification. In contrast to our model, most of the methods used for this task were largely based on extensive use of feature engineering, or a combination of feature engineering with semantic spaces. Table 2 shows the performance of our model compared to the top four models in the SemEval 2015 competition, and also a model that was trained using ParagraphVec. Our model (Tweet2Vec) outperforms all these models, without resorting to extensive task-specific feature engineering.",
"As with the last task, we first extract the vector representation of all the tweets in the dataset using Tweet2Vec and use that to train a logistic regression classifier using the vector representations. Even though there are three classes, the SemEval task is a binary task. The performance is measured as the average F1-score of the positive and the negative class. Table 3 shows the performance of our model compared to the top four models in the SemEval 2015 competition (note that only the F1-score is reported by SemEval for this task) and ParagraphVec. Our model outperforms all these models, again without resorting to any feature engineering.",
"FLOAT SELECTED: Table 2: Results of the paraphrase and semantic similarity in Twitter task.",
"FLOAT SELECTED: Table 3: Results of Twitter sentiment classification task."
],
"extractive_spans": [],
"free_form_answer": "On paraphrase and semantic similarity proposed model has F1 score of 0.677 compared to best previous model result of 0.674, while on sentiment classification it has 0.656 compared to 0.648 of best previous result.",
"highlighted_evidence": [
"Table 2 shows the performance of our model compared to the top four models in the SemEval 2015 competition, and also a model that was trained using ParagraphVec. Our model (Tweet2Vec) outperforms all these models, without resorting to extensive task-specific feature engineering.",
"Table 3 shows the performance of our model compared to the top four models in the SemEval 2015 competition (note that only the F1-score is reported by SemEval for this task) and ParagraphVec. Our model outperforms all these models, again without resorting to any feature engineering.",
"FLOAT SELECTED: Table 2: Results of the paraphrase and semantic similarity in Twitter task.",
"FLOAT SELECTED: Table 3: Results of Twitter sentiment classification task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"108fedbd9065c7885b3b9640c33493b40e6cde4e",
"f156af41f234a95c3767f9518961e003e7a5cd98"
],
"answer": [
{
"evidence": [
"As with the last task, we first extract the vector representation of all the tweets in the dataset using Tweet2Vec and use that to train a logistic regression classifier using the vector representations. Even though there are three classes, the SemEval task is a binary task. The performance is measured as the average F1-score of the positive and the negative class. Table 3 shows the performance of our model compared to the top four models in the SemEval 2015 competition (note that only the F1-score is reported by SemEval for this task) and ParagraphVec. Our model outperforms all these models, again without resorting to any feature engineering."
],
"extractive_spans": [],
"free_form_answer": "INESC-ID, lsislif, unitn and Webis.",
"highlighted_evidence": [
"Table 3 shows the performance of our model compared to the top four models in the SemEval 2015 competition (note that only the F1-score is reported by SemEval for this task) and ParagraphVec."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Results of Twitter sentiment classification task.",
"As with the last task, we first extract the vector representation of all the tweets in the dataset using Tweet2Vec and use that to train a logistic regression classifier using the vector representations. Even though there are three classes, the SemEval task is a binary task. The performance is measured as the average F1-score of the positive and the negative class. Table 3 shows the performance of our model compared to the top four models in the SemEval 2015 competition (note that only the F1-score is reported by SemEval for this task) and ParagraphVec. Our model outperforms all these models, again without resorting to any feature engineering."
],
"extractive_spans": [],
"free_form_answer": "INESC-ID, lsislif, unitn and Webis.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Results of Twitter sentiment classification task.",
"Table 3 shows the performance of our model compared to the top four models in the SemEval 2015 competition (note that only the F1-score is reported by SemEval for this task) and ParagraphVec."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"daed2916e56adb3d2380c17204599a4237c60e20",
"f53ddc9da3c4ad36713796c68efe4e7b4fe5ff3d"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Results of the paraphrase and semantic similarity in Twitter task.",
"The first evaluation is based on the SemEval 2015-Task 1: Paraphrase and Semantic Similarity in Twitter BIBREF10 . Given a pair of tweets, the goal is to predict their semantic equivalence (i.e., if they express the same or very similar meaning), through a binary yes/no judgement. The dataset provided for this task contains 18K tweet pairs for training and 1K pairs for testing, with INLINEFORM0 of these pairs being paraphrases, and INLINEFORM1 non-paraphrases. We first extract the vector representation of all the tweets in the dataset using our Tweet2Vec model. We use two features to represent a tweet pair. Given two tweet vectors INLINEFORM2 and INLINEFORM3 , we compute their element-wise product INLINEFORM4 and their absolute difference INLINEFORM5 and concatenate them together (Similar to BIBREF11 ). We then train a logistic regression model on these features using the dataset. Cross-validation is used for tuning the threshold for classification. In contrast to our model, most of the methods used for this task were largely based on extensive use of feature engineering, or a combination of feature engineering with semantic spaces. Table 2 shows the performance of our model compared to the top four models in the SemEval 2015 competition, and also a model that was trained using ParagraphVec. Our model (Tweet2Vec) outperforms all these models, without resorting to extensive task-specific feature engineering."
],
"extractive_spans": [],
"free_form_answer": "nnfeats, ikr, linearsvm and svckernel.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Results of the paraphrase and semantic similarity in Twitter task.",
"Table 2 shows the performance of our model compared to the top four models in the SemEval 2015 competition, and also a model that was trained using ParagraphVec."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first evaluation is based on the SemEval 2015-Task 1: Paraphrase and Semantic Similarity in Twitter BIBREF10 . Given a pair of tweets, the goal is to predict their semantic equivalence (i.e., if they express the same or very similar meaning), through a binary yes/no judgement. The dataset provided for this task contains 18K tweet pairs for training and 1K pairs for testing, with INLINEFORM0 of these pairs being paraphrases, and INLINEFORM1 non-paraphrases. We first extract the vector representation of all the tweets in the dataset using our Tweet2Vec model. We use two features to represent a tweet pair. Given two tweet vectors INLINEFORM2 and INLINEFORM3 , we compute their element-wise product INLINEFORM4 and their absolute difference INLINEFORM5 and concatenate them together (Similar to BIBREF11 ). We then train a logistic regression model on these features using the dataset. Cross-validation is used for tuning the threshold for classification. In contrast to our model, most of the methods used for this task were largely based on extensive use of feature engineering, or a combination of feature engineering with semantic spaces. Table 2 shows the performance of our model compared to the top four models in the SemEval 2015 competition, and also a model that was trained using ParagraphVec. Our model (Tweet2Vec) outperforms all these models, without resorting to extensive task-specific feature engineering.",
"FLOAT SELECTED: Table 2: Results of the paraphrase and semantic similarity in Twitter task."
],
"extractive_spans": [],
"free_form_answer": "nnfeats, ikr, linearsvm and svckernel.",
"highlighted_evidence": [
"Table 2 shows the performance of our model compared to the top four models in the SemEval 2015 competition, and also a model that was trained using ParagraphVec. ",
"FLOAT SELECTED: Table 2: Results of the paraphrase and semantic similarity in Twitter task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"did they experiment with other languages?",
"by how much did their system outperform previous tasks?",
"what are the previous state of the art for sentiment categorization?",
"what are the previous state of the art for tweet semantic similarity?"
],
"question_id": [
"deb0c3524a3b3707e8b20abd27f54ad6188d6e4e",
"d7e43a3db8616a106304ac04ba729c1fee78761d",
"0ba8f04c3fd64ee543b9b4c022310310bc5d3c23",
"b7d02f12baab5db46ea9403d8932e1cd1b022f79"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Layer Parameters of CharCNN",
"Table 2: Results of the paraphrase and semantic similarity in Twitter task.",
"Table 3: Results of Twitter sentiment classification task."
],
"file": [
"3-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png"
]
} | [
"by how much did their system outperform previous tasks?",
"what are the previous state of the art for sentiment categorization?",
"what are the previous state of the art for tweet semantic similarity?"
] | [
[
"1607.07514-Sentiment Classification-1",
"1607.07514-5-Table3-1.png",
"1607.07514-5-Table2-1.png",
"1607.07514-Semantic Relatedness-0"
],
[
"1607.07514-Sentiment Classification-1",
"1607.07514-5-Table3-1.png"
],
[
"1607.07514-5-Table2-1.png",
"1607.07514-Semantic Relatedness-0"
]
] | [
"On paraphrase and semantic similarity proposed model has F1 score of 0.677 compared to best previous model result of 0.674, while on sentiment classification it has 0.656 compared to 0.648 of best previous result.",
"INESC-ID, lsislif, unitn and Webis.",
"nnfeats, ikr, linearsvm and svckernel."
] | 212 |
1809.03680 | Learning Scripts as Hidden Markov Models | Scripts have been proposed to model the stereotypical event sequences found in narratives. They can be applied to make a variety of inferences including filling gaps in the narratives and resolving ambiguous references. This paper proposes the first formal framework for scripts based on Hidden Markov Models (HMMs). Our framework supports robust inference and learning algorithms, which are lacking in previous clustering models. We develop an algorithm for structure and parameter learning based on Expectation Maximization and evaluate it on a number of natural datasets. The results show that our algorithm is superior to several informed baselines for predicting missing events in partial observation sequences. | {
"paragraphs": [
[
" Scripts were developed as a means of representing stereotypical event sequences and interactions in narratives. The benefits of scripts for encoding common sense knowledge, filling in gaps in a story, resolving ambiguous references, and answering comprehension questions have been amply demonstrated in the early work in natural language understanding BIBREF0 . The earliest attempts to learn scripts were based on explanation-based learning, which can be characterized as example-guided deduction from first principles BIBREF1 , BIBREF2 . While this approach is successful in generalizing from a small number of examples, it requires a strong domain theory, which limits its applicability.",
"More recently, some new graph-based algorithms for inducing script-like structures from text have emerged. “Narrative Chains” is a narrative model similar to Scripts BIBREF3 . Each Narrative Chain is a directed graph indicating the most frequent temporal relationship between the events in the chain. Narrative Chains are learned by a novel application of pairwise mutual information and temporal relation learning. Another graph learning approach employs Multiple Sequence Alignment in conjunction with a semantic similarity function to cluster sequences of event descriptions into a directed graph BIBREF4 . More recently still, graphical models have been proposed for representing script-like knowledge, but these lack the temporal component that is central to this paper and to the early script work. These models instead focus on learning bags of related events BIBREF5 , BIBREF6 .",
"While the above approches demonstrate the learnability of script-like knowledge, they do not offer a probabilistic framework to reason robustly under uncertainty taking into account the temporal order of events. In this paper we present the first formal representation of scripts as Hidden Markov Models (HMMs), which support robust inference and effective learning algorithms. The states of the HMM correspond to event types in scripts, such as entering a restaurant or opening a door. Observations correspond to natural language sentences that describe the event instances that occur in the story, e.g., “John went to Starbucks. He came back after ten minutes.” The standard inference algorithms, such as the Forward-Backward algorithm, are able to answer questions about the hidden states given the observed sentences, for example, “What did John do in Starbucks?”",
"There are two complications that need to be dealt with to adapt HMMs to model narrative scripts. First, both the set of states, i.e., event types, and the set of observations are not pre-specified but are to be learned from data. We assume that the set of possible observations and the set of event types to be bounded but unknown. We employ the clustering algorithm proposed in BIBREF4 to reduce the natural language sentences, i.e., event descriptions, to a small set of observations and states based on their Wordnet similarity.",
"The second complication of narrative texts is that many events may be omitted either in the narration or by the event extraction process. More importantly, there is no indication of a time lapse or a gap in the story, so the standard forward-backward algorithm does not apply. To account for this, we allow the states to skip generating observations with some probability. This kind of HMMs, with insertions and gaps, have been considered previously in speech processing BIBREF7 and in computational biology BIBREF8 . We refine these models by allowing state-dependent missingness, without introducing additional “insert states” or “delete states” as in BIBREF8 . In this paper, we restrict our attention to the so-called “Left-to-Right HMMs” which have acyclic graphical structure with possible self-loops, as they support more efficient inference algorithms than general HMMs and suffice to model most of the natural scripts. We consider the problem of learning the structure and parameters of scripts in the form of HMMs from sequences of natural language sentences. Our solution to script learning is a novel bottom-up method for structure learning, called SEM-HMM, which is inspired by Bayesian Model Merging (BMM) BIBREF9 and Structural Expectation Maximization (SEM) BIBREF10 . It starts with a fully enumerated HMM representation of the event sequences and incrementally merges states and deletes edges to improve the posterior probability of the structure and the parameters given the data. We compare our approach to several informed baselines on many natural datasets and show its superior performance. We believe our work represents the first formalization of scripts that supports probabilistic inference, and paves the way for robust understanding of natural language texts."
],
[
"",
"Consider an activity such as answering the doorbell. An example HMM representation of this activity is illustrated in Figure FIGREF1 . Each box represents a state, and the text within is a set of possible event descriptions (i.e., observations). Each event description is also marked with its conditional probability. Each edge represents a transition from one state to another and is annotated with its conditional probability.",
"In this paper, we consider a special class of HMMs with the following properties. First, we allow some observations to be missing. This is a natural phenomenon in text, where not all events are mentioned or extracted. We call these null observations and represent them with a special symbol INLINEFORM0 . Second, we assume that the states of the HMM can be ordered such that all transitions take place only in that order. These are called Left-to-Right HMMs in the literature BIBREF11 , BIBREF7 . Self-transitions of states are permitted and represent “spurious” observations or events with multi-time step durations. While our work can be generalized to arbitrary HMMs, we find that the Left-to-Right HMMs suffice to model scripts in our corpora. Formally, an HMM is a 4-tuple INLINEFORM1 , where INLINEFORM2 is a set of states, INLINEFORM3 is the probability of transition from INLINEFORM4 to INLINEFORM5 , INLINEFORM6 is a set of possible non-null observations, and INLINEFORM7 is the probability of observing INLINEFORM8 when in state INLINEFORM9 , where INLINEFORM11 , and INLINEFORM12 is the terminal state. An HMM is Left-to-Right if the states of the HMM can be ordered from INLINEFORM13 thru INLINEFORM14 such that INLINEFORM15 is non-zero only if INLINEFORM16 . We assume that our target HMM is Left-to-Right. We index its states according to a topological ordering of the transition graph. An HMM is a generative model of a distribution over sequences of observations. For convenience w.l.o.g. we assume that each time it is “run” to generate a sample, the HMM starts in the same initial state INLINEFORM17 , and goes through a sequence of transitions according to INLINEFORM18 until it reaches the same final state INLINEFORM19 , while emitting an observation in INLINEFORM20 in each state according to INLINEFORM21 . The initial state INLINEFORM22 and the final state INLINEFORM23 respectively emit the distinguished observation symbols, “ INLINEFORM24 ” and “ INLINEFORM25 ” in INLINEFORM26 , which are emitted by no other state. The concatenation of observations in successive states consitutes a sample of the distribution represented by the HMM. Because the null observations are removed from the generated observations, the length of the output string may be smaller than the number of state transitions. It could also be larger than the number of distinct state transitions, since we allow observations to be generated on the self transitions. Thus spurious and missing observations model insertions and deletions in the outputs of HMMs without introducing special states as in profile HMMs BIBREF8 .",
"In this paper we address the following problem. Given a set of narrative texts, each of which describes a stereotypical event sequence drawn from a fixed but unknown distribution, learn the structure and parameters of a Left-to-Right HMM model that best captures the distribution of the event sequences. We evaluate the algorithm on natural datasets by how well the learned HMM can predict observations removed from the test sequences."
],
[
"",
"At the top level, the algorithm is input a set of documents INLINEFORM0 , where each document is a sequence of natural language sentences that describes the same stereotypical activity. The output of the algorithm is a Left-to-Right HMM that represents that activity.",
"Our approach has four main components, which are described in the next four subsections: Event Extraction, Parameter Estimation, Structure Learning, and Structure Scoring. The event extraction step clusters the input sentences into event types and replaces the sentences with the corresponding cluster labels. After extraction, the event sequences are iteratively merged with the current HMM in batches of size INLINEFORM0 starting with an empty HMM. Structure Learning then merges pairs of states (nodes) and removes state transitions (edges) by greedy hill climbing guided by the improvement in approximate posterior probability of the HMM. Once the hill climbing converges to a local optimum, the maxmimum likelihood HMM parameters are re-estimated using the EM procedure based on all the data seen so far. Then the next batch of INLINEFORM1 sequences are processed. We will now describe these steps in more detail."
],
[
"",
"Given a set of sequences of sentences, the event extraction algorithm clusters them into events and arranges them into a tree structured HMM. For this step, we assume that each sentence has a simple structure that consists of a single verb and an object. We make the further simplifying assumption that the sequences of sentences in all documents describe the events in temporal order. Although this assumption is often violated in natural documents, we ignore this problem to focus on script learning. There have been some approaches in previous work that specifically address the problem of inferreing temporal order of events from texts, e.g., see BIBREF12 .",
"Given the above assumptions, following BIBREF4 , we apply a simple agglomerative clustering algorithm that uses a semantic similarity function over sentence pairs INLINEFORM0 given by INLINEFORM1 , where INLINEFORM2 is the verb and INLINEFORM3 is the object in the sentence INLINEFORM4 . Here INLINEFORM5 is the path similarity metric from Wordnet BIBREF13 . It is applied to the first verb (preferring verbs that are not stop words) and to the objects from each pair of sentences. The constants INLINEFORM6 and INLINEFORM7 are tuning parameters that adjust the relative importance of each component. Like BIBREF4 , we found that a high weight on the verb similarity was important to finding meaningful clusters of events. The most frequent verb in each cluster is extracted to name the event type that corresponds to that cluster.",
"The initial configuration of the HMM is a Prefix Tree Acceptor, which is constructed by starting with a single event sequence and then adding sequences by branching the tree at the first place the new sequence differs from it BIBREF14 , BIBREF15 . By repeating this process, an HMM that fully enumerates the data is constructed."
],
[
"",
"In this section we describe our parameter estimation methods. While parameter estimation in this kind of HMM was treated earlier in the literature BIBREF11 , BIBREF7 , we provide a more principled approach to estimate the state-dependent probability of INLINEFORM0 transitions from data without introducing special insert and delete states BIBREF8 . We assume that the structure of the Left-to-Right HMM is fixed based on the preceding structure learning step, which is described in Section SECREF10 .",
"The main difficulty in HMM parameter estimation is that the states of the HMM are not observed. The Expectation-Maximization (EM) procedure (also called the Baum-Welch algorithm in HMMs) alternates between estimating the hidden states in the event sequences by running the Forward-Backward algorithm (the Expectation step) and finding the maximum likelihood estimates (the Maximization step) of the transition and observation parameters of the HMM BIBREF16 . Unfortunately, because of the INLINEFORM0 -transitions the state transitions of our HMM are not necessarily aligned with the observations. Hence we explicitly maintain two indices, the time index INLINEFORM1 and the observation index INLINEFORM2 . We define INLINEFORM3 to be the joint probability that the HMM is in state INLINEFORM4 at time INLINEFORM5 and has made the observations INLINEFORM6 . This is computed by the forward pass of the algorithm using the following recursion. Equations EQREF5 and represent the base case of the recursion, while Equation represents the case for null observations. Note that the observation index INLINEFORM7 of the recursive call is not advanced unlike in the second half of Equation where it is advanced for a normal observation. We exploit the fact that the HMM is Left-to-Right and only consider transitions to INLINEFORM8 from states with indices INLINEFORM9 . The time index INLINEFORM10 is incremented starting 0, and the observation index INLINEFORM11 varies from 0 thru INLINEFORM12 . ",
" DISPLAYFORM0 ",
"",
" The backward part of the standard Forward-Backward algorithm starts from the last time step INLINEFORM0 and reasons backwards. Unfortunately in our setting, we do not know INLINEFORM1 —the true number of state transitions—as some of the observations are missing. Hence, we define INLINEFORM2 as the conditional probability of observing INLINEFORM3 in the remaining INLINEFORM4 steps given that the current state is INLINEFORM5 . This allows us to increment INLINEFORM6 starting from 0 as recursion proceeds, rather than decrementing it from INLINEFORM7 . ",
" DISPLAYFORM0 ",
"",
" Equation EQREF7 calculates the probability of the observation sequence INLINEFORM0 , which is computed by marginalizing INLINEFORM1 over time INLINEFORM2 and state INLINEFORM3 and setting the second index INLINEFORM4 to the length of the observation sequence INLINEFORM5 . The quantity INLINEFORM6 serves as the normalizing factor for the last three equations. DISPLAYFORM0 ",
"",
" Equation , the joint distribution of the state and observation index INLINEFORM0 at time INLINEFORM1 is computed by convolution, i.e., multiplying the INLINEFORM2 and INLINEFORM3 that correspond to the same time step and the same state and marginalizing out the length of the state-sequence INLINEFORM4 . Convolution is necessary, as the length of the state-sequence INLINEFORM5 is a random variable equal to the sum of the corresponding time indices of INLINEFORM6 and INLINEFORM7 .",
"Equation computes the joint probability of a state-transition associated with a null observation by first multiplying the state transition probability by the null observation probability given the state transition and the appropriate INLINEFORM0 and INLINEFORM1 values. It then marginalizes out the observation index INLINEFORM2 . Again we need to compute a convolution with respect to INLINEFORM3 to take into account the variation over the total number of state transitions. Equation calculates the same probability for a non-null observation INLINEFORM4 . This equation is similar to Equation with two differences. First, we ensure that the observation is consistent with INLINEFORM5 by multiplying the product with the indicator function INLINEFORM6 which is 1 if INLINEFORM7 and 0 otherwise. Second, we advance the observation index INLINEFORM8 in the INLINEFORM9 function.",
"Since the equations above are applied to each individual observation sequence, INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 all have an implicit index INLINEFORM4 which denotes the observation sequence and has been omitted in the above equations. We will make it explicit below and calculate the expected counts of state visits, state transitions, and state transition observation triples. DISPLAYFORM0 ",
"",
" ",
"Equation EQREF8 counts the total expected number of visits of each state in the data. Also, Equation estimates the expected number of transitions between each state pair. Finally, Equation computes the expected number of observations and state-transitions including null transitions. This concludes the E-step of the EM procedure.",
"The M-step of the EM procedure consists of Maximum Aposteriori (MAP) estimation of the transition and observation distributions is done assuming an uninformative Dirichlet prior. This amounts to adding a pseudocount of 1 to each of the next states and observation symbols. The observation distributions for the initial and final states INLINEFORM0 and INLINEFORM1 are fixed to be the Kronecker delta distributions at their true values. DISPLAYFORM0 ",
"",
" The E-step and the M-step are repeated until convergence of the parameter estimates."
],
[
"",
"We now describe our structure learning algorithm, SEM-HMM. Our algorithm is inspired by Bayesian Model Merging (BMM) BIBREF9 and Structural EM (SEM) BIBREF10 and adapts them to learning HMMs with missing observations. SEM-HMM performs a greedy hill climbing search through the space of acyclic HMM structures. It iteratively proposes changes to the structure either by merging states or by deleting edges. It evaluates each change and makes the one with the best score. An exact implementation of this method is expensive, because, each time a structure change is considered, the MAP parameters of the structure given the data must be re-estimated. One of the key insights of both SEM and BMM is that this expensive re-estimation can be avoided in factored models by incrementally computing the changes to various expected counts using only local information. While this calculation is only approximate, it is highly efficient.",
"During the structure search, the algorithm considers every possible structure change, i.e., merging of pairs of states and deletion of state-transitions, checks that the change does not create cycles, evaluates it according to the scoring function and selects the best scoring structure. This is repeated until the structure can no longer be improved (see Algorithm SECREF10 ).",
"LearnModel INLINEFORM0 , Data INLINEFORM1 , Changes INLINEFORM2 ",
" INLINEFORM0 INLINEFORM1 = AcyclicityFilter INLINEFORM2 INLINEFORM3 ",
" INLINEFORM0 INLINEFORM1 INLINEFORM2 ",
" ",
"The Merge States operator creates a new state from the union of a state pair's transition and observation distributions. It must assign transition and observation distributions to the new merged state. To be exact, we need to redo the parameter estimation for the changed structure. To compute the impact of several proposed changes efficiently, we assume that all probabilistic state transitions and trajectories for the observed sequences remain the same as before except in the changed parts of the structure. We call this “locality of change” assumption, which allows us to add the corresponding expected counts from the states being merged as shown below. DISPLAYFORM0 ",
"",
" ",
"The second kind of structure change we consider is edge deletion and consists of removing a transition between two states and redistributing its evidence along the other paths between the same states. Again, making the locality of change assumption, we only recompute the parameters of the transition and observation distributions that occur in the paths between the two states. We re-estimate the parameters due to deleting an edge INLINEFORM0 , by effectively redistributing the expected transitions from INLINEFORM1 to INLINEFORM2 , INLINEFORM3 , among other edges between INLINEFORM4 and INLINEFORM5 based on the parameters of the current model.",
"This is done efficiently using a procedure similar to the Forward-Backward algorithm under the null observation sequence. Algorithm SECREF10 takes the current model INLINEFORM0 , an edge ( INLINEFORM1 ), and the expected count of the number of transitions from INLINEFORM2 to INLINEFORM3 , INLINEFORM4 , as inputs. It updates the counts of the other transitions to compensate for removing the edge between INLINEFORM5 and INLINEFORM6 . It initializes the INLINEFORM7 of INLINEFORM8 and the INLINEFORM9 of INLINEFORM10 with 1 and the rest of the INLINEFORM11 s and INLINEFORM12 s to 0. It makes two passes through the HMM, first in the topological order of the nodes in the graph and the second in the reverse topological order. In the first, “forward” pass from INLINEFORM13 to INLINEFORM14 , it calculates the INLINEFORM15 value of each node INLINEFORM16 that represents the probability that a sequence that passes through INLINEFORM17 also passes through INLINEFORM18 while emitting no observation. In the second, “backward” pass, it computes the INLINEFORM19 value of a node INLINEFORM20 that represents the probability that a sequence that passes through INLINEFORM21 emits no observation and later passes through INLINEFORM22 . The product of INLINEFORM23 and INLINEFORM24 gives the probability that INLINEFORM25 is passed through when going from INLINEFORM26 to INLINEFORM27 and emits no observation. Multiplying it by the expected number of transitions INLINEFORM28 gives the expected number of additional counts which are added to INLINEFORM29 to compensate for the deleted transition INLINEFORM30 . After the distribution of the evidence, all the transition and observation probabilities are re-estimated for the nodes and edges affected by the edge deletion",
"DeleteEdgeModel INLINEFORM0 , edge INLINEFORM1 , count INLINEFORM2 ",
" INLINEFORM0 INLINEFORM1 INLINEFORM2 to INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 ",
" INLINEFORM0 downto INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 = INLINEFORM5 INLINEFORM6 = INLINEFORM7 INLINEFORM8 ",
"Forward-Backward algorithm to delete an edge and re-distribute the expected counts. ",
"In principle, one could continue making incremental structural changes and parameter updates and never run EM again. This is exactly what is done in Bayesian Model Merging (BMM) BIBREF9 . However, a series of structural changes followed by approximate incremental parameter updates could lead to bad local optima. Hence, after merging each batch of INLINEFORM0 sequences into the HMM, we re-run EM for parameter estimation on all sequences seen thus far."
],
[
"",
"We now describe how we score the structures produced by our algorithm to select the best structure. We employ a Bayesian scoring function, which is the posterior probability of the model given the data, denoted INLINEFORM0 . The score is decomposed via Bayes Rule (i.e., INLINEFORM1 ), and the denominator is omitted since it is invariant with regards to the model.",
"Since each observation sequence is independent of the others, the data likelihood INLINEFORM0 is calculated using the Forward-Backward algorithm and Equation EQREF7 in Section SECREF4 . Because the initial model fully enumerates the data, any merge can only reduce the data likelihood. Hence, the model prior INLINEFORM1 must be designed to encourage generalization via state merges and edge deletions (described in Section SECREF10 ). We employed a prior with three components: the first two components are syntactic and penalize the number of states INLINEFORM2 and the number of non-zero transitions INLINEFORM3 respectively. The third component penalizes the number of frequently-observed semantic constraint violations INLINEFORM4 . In particular, the prior probabilty of the model INLINEFORM5 . The INLINEFORM6 parameters assign weights to each component in the prior.",
"The semantic constraints are learned from the event sequences for use in the model prior. The constraints take the simple form “ INLINEFORM0 never follows INLINEFORM1 .” They are learned by generating all possible such rules using pairwise permutations of event types, and evaluating them on the training data. In particular, the number of times each rule is violated is counted and a INLINEFORM2 -test is performed to determine if the violation rate is lower than a predetermined error rate. Those rules that pass the hypothesis test with a threshold of INLINEFORM3 are included. When evaluating a model, these contraints are considered violated if the model could generate a sequence of observations that violates the constraint.",
"Also, in addition to incrementally computing the transition and observation counts, INLINEFORM0 and INLINEFORM1 , the likelihood, INLINEFORM2 can be incrementally updated with structure changes as well. Note that the likelihood can be expressed as INLINEFORM3 when the state transitions are observed. Since the state transitions are not actually observed, we approximate the above expression by replacing the observed counts with expected counts. Further, the locality of change assumption allows us to easily calculate the effect of changed expected counts and parameters on the likelihood by dividing it by the old products and multiplying by the new products. We call this version of our algorithm SEM-HMM-Approx."
],
[
"",
"We now present our experimental results on SEM-HMM and SEM-HMM-Approx. The evaluation task is to predict missing events from an observed sequence of events. For comparison, four baselines were also evaluated. The “Frequency” baseline predicts the most frequent event in the training set that is not found in the observed test sequence. The “Conditional” baseline predicts the next event based on what most frequently follows the prior event. A third baseline, referred to as “BMM,” is a version of our algorithm that does not use EM for parameter estimation and instead only incrementally updates the parameters starting from the raw document counts. Further, it learns a standard HMM, that is, with no INLINEFORM0 transitions. This is very similar to the Bayesian Model Merging approach for HMMs BIBREF9 . The fourth baseline is the same as above, but uses our EM algorithm for parameter estimation without INLINEFORM1 transitions. It is referred to as “BMM + EM.”",
"The Open Minds Indoor Common Sense (OMICS) corpus was developed by the Honda Research Institute and is based upon the Open Mind Common Sense project BIBREF17 . It describes 175 common household tasks with each task having 14 to 122 narratives describing, in short sentences, the necessary steps to complete it. Each narrative consists of temporally ordered, simple sentences from a single author that describe a plan to accomplish a task. Examples from the “Answer the Doorbell” task can be found in Table 2. The OMICS corpus has 9044 individual narratives and its short and relatively consistent language lends itself to relatively easy event extraction.",
"The 84 domains with at least 50 narratives and 3 event types were used for evaluation. For each domain, forty percent of the narratives were withheld for testing, each with one randomly-chosen event omitted. The model was evaluated on the proportion of correctly predicted events given the remaining sequence. On average each domain has 21.7 event types with a standard deviation of 4.6. Further, the average narrative length across domains is 3.8 with standard deviation of 1.7. This implies that only a frcation of the event types are present in any given narrative. There is a high degree of omission of events and many different ways of accomplishing each task. Hence, the prediction task is reasonably difficult, as evidenced by the simple baselines. Neither the frequency of events nor simple temporal structure is enough to accurately fill in the gaps which indicates that most sophisticated modeling such as SEM-HMM is needed.",
"The average accuracy across the 84 domains for each method is found in Table 1. On average our method significantly out-performed all the baselines, with the average improvement in accuracy across OMICS tasks between SEM-HMM and each baseline being statistically significant at a .01 level across all pairs and on sizes of INLINEFORM0 and INLINEFORM1 using one-sided paired t-tests. For INLINEFORM2 improvement was not statistically greater than zero. We see that the results improve with batch size INLINEFORM3 until INLINEFORM4 for SEM-HMM and BMM+EM, but they decrease with batch size for BMM without EM. Both of the methods which use EM depend on statistics to be robust and hence need a larger INLINEFORM5 value to be accurate. However for BMM, a smaller INLINEFORM6 size means it reconciles a couple of documents with the current model in each iteration which ultimately helps guide the structure search. The accuracy for “SEM-HMM Approx.” is close to the exact version at each batch level, while only taking half the time on average."
],
[
"",
"In this paper, we have given the first formal treatment of scripts as HMMs with missing observations. We adapted the HMM inference and parameter estimation procedures to scripts and developed a new structure learning algorithm, SEM-HMM, based on the EM procedure. It improves upon BMM by allowing for INLINEFORM0 transitions and by incorporating maximum likelihood parameter estimation via EM. We showed that our algorithm is effective in learning scripts from documents and performs better than other baselines on sequence prediction tasks. Thanks to the assumption of missing observations, the graphical structure of the scripts is usually sparse and intuitive. Future work includes learning from more natural text such as newspaper articles, enriching the representations to include objects and relations, and integrating HMM inference into text understanding."
],
[
"We would like to thank Nate Chambers, Frank Ferraro, and Ben Van Durme for their helpful comments, criticism, and feedback. Also we would like to thank the SCALE 2013 workshop. This work was supported by the DARPA and AFRL under contract No. FA8750-13-2-0033. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the DARPA, the AFRL, or the US government."
]
],
"section_name": [
"Introduction",
"Problem Setup",
"HMM-Script Learning",
"Event Extraction",
"Parameter Estimation with EM",
"Structure Learning",
"Structure Scoring",
"Experiments and Results",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"0ccf646f3d1845548025550a8c99b943dc143e6c",
"e47edba37a36b48b3672bc674c67e3d7a4a1a437"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: The average accuracy on the OMICS domains"
],
"extractive_spans": [],
"free_form_answer": "On r=2 SEM-HMM Approx. is 2.2% better, on r=5 SEM-HMM is 3.9% better and on r=10 SEM-HMM is 3.9% better than the best baseline",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: The average accuracy on the OMICS domains"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The average accuracy across the 84 domains for each method is found in Table 1. On average our method significantly out-performed all the baselines, with the average improvement in accuracy across OMICS tasks between SEM-HMM and each baseline being statistically significant at a .01 level across all pairs and on sizes of INLINEFORM0 and INLINEFORM1 using one-sided paired t-tests. For INLINEFORM2 improvement was not statistically greater than zero. We see that the results improve with batch size INLINEFORM3 until INLINEFORM4 for SEM-HMM and BMM+EM, but they decrease with batch size for BMM without EM. Both of the methods which use EM depend on statistics to be robust and hence need a larger INLINEFORM5 value to be accurate. However for BMM, a smaller INLINEFORM6 size means it reconciles a couple of documents with the current model in each iteration which ultimately helps guide the structure search. The accuracy for “SEM-HMM Approx.” is close to the exact version at each batch level, while only taking half the time on average."
],
"extractive_spans": [
"On average our method significantly out-performed all the baselines, with the average improvement in accuracy across OMICS tasks between SEM-HMM and each baseline being statistically significant at a .01 level across all pairs and on sizes of INLINEFORM0 and INLINEFORM1 using one-sided paired t-tests."
],
"free_form_answer": "",
"highlighted_evidence": [
"The average accuracy across the 84 domains for each method is found in Table 1. On average our method significantly out-performed all the baselines, with the average improvement in accuracy across OMICS tasks between SEM-HMM and each baseline being statistically significant at a .01 level across all pairs and on sizes of INLINEFORM0 and INLINEFORM1 using one-sided paired t-tests. For INLINEFORM2 improvement was not statistically greater than zero. We see that the results improve with batch size INLINEFORM3 until INLINEFORM4 for SEM-HMM and BMM+EM, but they decrease with batch size for BMM without EM. Both of the methods which use EM depend on statistics to be robust and hence need a larger INLINEFORM5 value to be accurate. However for BMM, a smaller INLINEFORM6 size means it reconciles a couple of documents with the current model in each iteration which ultimately helps guide the structure search. The accuracy for “SEM-HMM Approx.” is close to the exact version at each batch level, while only taking half the time on average."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"1d62408abcf137f160506c8a2b067e8cdf7c0d8d",
"2e9317488d3fe4cf90b8741630527d0d8ad18aec"
],
"answer": [
{
"evidence": [
"We now present our experimental results on SEM-HMM and SEM-HMM-Approx. The evaluation task is to predict missing events from an observed sequence of events. For comparison, four baselines were also evaluated. The “Frequency” baseline predicts the most frequent event in the training set that is not found in the observed test sequence. The “Conditional” baseline predicts the next event based on what most frequently follows the prior event. A third baseline, referred to as “BMM,” is a version of our algorithm that does not use EM for parameter estimation and instead only incrementally updates the parameters starting from the raw document counts. Further, it learns a standard HMM, that is, with no INLINEFORM0 transitions. This is very similar to the Bayesian Model Merging approach for HMMs BIBREF9 . The fourth baseline is the same as above, but uses our EM algorithm for parameter estimation without INLINEFORM1 transitions. It is referred to as “BMM + EM.”"
],
"extractive_spans": [],
"free_form_answer": "The \"frequency\" baseline, the \"conditional\" baseline, the \"BMM\" baseline and the \"BMM+EM\" baseline",
"highlighted_evidence": [
"The evaluation task is to predict missing events from an observed sequence of events. For comparison, four baselines were also evaluated. The “Frequency” baseline predicts the most frequent event in the training set that is not found in the observed test sequence. The “Conditional” baseline predicts the next event based on what most frequently follows the prior event. A third baseline, referred to as “BMM,” is a version of our algorithm that does not use EM for parameter estimation and instead only incrementally updates the parameters starting from the raw document counts. Further, it learns a standard HMM, that is, with no INLINEFORM0 transitions. This is very similar to the Bayesian Model Merging approach for HMMs BIBREF9 . The fourth baseline is the same as above, but uses our EM algorithm for parameter estimation without INLINEFORM1 transitions. It is referred to as “BMM + EM.”"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We now present our experimental results on SEM-HMM and SEM-HMM-Approx. The evaluation task is to predict missing events from an observed sequence of events. For comparison, four baselines were also evaluated. The “Frequency” baseline predicts the most frequent event in the training set that is not found in the observed test sequence. The “Conditional” baseline predicts the next event based on what most frequently follows the prior event. A third baseline, referred to as “BMM,” is a version of our algorithm that does not use EM for parameter estimation and instead only incrementally updates the parameters starting from the raw document counts. Further, it learns a standard HMM, that is, with no INLINEFORM0 transitions. This is very similar to the Bayesian Model Merging approach for HMMs BIBREF9 . The fourth baseline is the same as above, but uses our EM algorithm for parameter estimation without INLINEFORM1 transitions. It is referred to as “BMM + EM.”"
],
"extractive_spans": [
"“Frequency” baseline",
"“Conditional” baseline",
"BMM",
"BMM + EM"
],
"free_form_answer": "",
"highlighted_evidence": [
"For comparison, four baselines were also evaluated. The “Frequency” baseline predicts the most frequent event in the training set that is not found in the observed test sequence. The “Conditional” baseline predicts the next event based on what most frequently follows the prior event. A third baseline, referred to as “BMM,” is a version of our algorithm that does not use EM for parameter estimation and instead only incrementally updates the parameters starting from the raw document counts.",
"The fourth baseline is the same as above, but uses our EM algorithm for parameter estimation without INLINEFORM1 transitions. It is referred to as “BMM + EM.”"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"0124c0fea47836584807bbfd177d76ef6a3c03e1",
"a79b60b4e162b479db43ef33af198d1fc2784a3f"
],
"answer": [
{
"evidence": [
"The Open Minds Indoor Common Sense (OMICS) corpus was developed by the Honda Research Institute and is based upon the Open Mind Common Sense project BIBREF17 . It describes 175 common household tasks with each task having 14 to 122 narratives describing, in short sentences, the necessary steps to complete it. Each narrative consists of temporally ordered, simple sentences from a single author that describe a plan to accomplish a task. Examples from the “Answer the Doorbell” task can be found in Table 2. The OMICS corpus has 9044 individual narratives and its short and relatively consistent language lends itself to relatively easy event extraction."
],
"extractive_spans": [
"The Open Minds Indoor Common Sense (OMICS) corpus "
],
"free_form_answer": "",
"highlighted_evidence": [
"The Open Minds Indoor Common Sense (OMICS) corpus was developed by the Honda Research Institute and is based upon the Open Mind Common Sense project BIBREF17 . It describes 175 common household tasks with each task having 14 to 122 narratives describing, in short sentences, the necessary steps to complete it. Each narrative consists of temporally ordered, simple sentences from a single author that describe a plan to accomplish a task. Examples from the “Answer the Doorbell” task can be found in Table 2. The OMICS corpus has 9044 individual narratives and its short and relatively consistent language lends itself to relatively easy event extraction."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The Open Minds Indoor Common Sense (OMICS) corpus was developed by the Honda Research Institute and is based upon the Open Mind Common Sense project BIBREF17 . It describes 175 common household tasks with each task having 14 to 122 narratives describing, in short sentences, the necessary steps to complete it. Each narrative consists of temporally ordered, simple sentences from a single author that describe a plan to accomplish a task. Examples from the “Answer the Doorbell” task can be found in Table 2. The OMICS corpus has 9044 individual narratives and its short and relatively consistent language lends itself to relatively easy event extraction."
],
"extractive_spans": [
"Open Minds Indoor Common Sense (OMICS) corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Open Minds Indoor Common Sense (OMICS) corpus was developed by the Honda Research Institute and is based upon the Open Mind Common Sense project BIBREF17 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"By how much do they outperform baselines?",
"Which baselines do they use?",
"Which datasets do they evaluate on?"
],
"question_id": [
"ff2b58c90784eda6dddd8a92028e6432442c1093",
"5e4eac0b0a73d465d74568c21819acaec557b700",
"bc6ad5964f444cf414b661a4b942dafb7640c564"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: A portion of a learned “Answer the Doorbell” script",
"Table 1: The average accuracy on the OMICS domains",
"Table 2: Examples from the OMICS “Answer the Doorbell” task with event triggers underlined"
],
"file": [
"2-Figure1-1.png",
"6-Table1-1.png",
"6-Table2-1.png"
]
} | [
"By how much do they outperform baselines?",
"Which baselines do they use?"
] | [
[
"1809.03680-Experiments and Results-4",
"1809.03680-6-Table1-1.png"
],
[
"1809.03680-Experiments and Results-1"
]
] | [
"On r=2 SEM-HMM Approx. is 2.2% better, on r=5 SEM-HMM is 3.9% better and on r=10 SEM-HMM is 3.9% better than the best baseline",
"The \"frequency\" baseline, the \"conditional\" baseline, the \"BMM\" baseline and the \"BMM+EM\" baseline"
] | 213 |
1609.02075 | The Social Dynamics of Language Change in Online Networks | Language change is a complex social phenomenon, revealing pathways of communication and sociocultural influence. But, while language change has long been a topic of study in sociolinguistics, traditional linguistic research methods rely on circumstantial evidence, estimating the direction of change from differences between older and younger speakers. In this paper, we use a data set of several million Twitter users to track language changes in progress. First, we show that language change can be viewed as a form of social influence: we observe complex contagion for phonetic spellings and"netspeak"abbreviations (e.g., lol), but not for older dialect markers from spoken language. Next, we test whether specific types of social network connections are more influential than others, using a parametric Hawkes process model. We find that tie strength plays an important role: densely embedded social ties are significantly better conduits of linguistic influence. Geographic locality appears to play a more limited role: we find relatively little evidence to support the hypothesis that individuals are more influenced by geographically local social ties, even in their usage of geographical dialect markers. | {
"paragraphs": [
[
"Change is a universal property of language. For example, English has changed so much that Renaissance-era texts like The Canterbury Tales must now be read in translation. Even contemporary American English continues to change and diversify at a rapid pace—to such an extent that some geographical dialect differences pose serious challenges for comprehensibility BIBREF0 . Understanding language change is therefore crucial to understanding language itself, and has implications for the design of more robust natural language processing systems BIBREF1 .",
"Language change is a fundamentally social phenomenon BIBREF2 . For a new linguistic form to succeed, at least two things must happen: first, speakers (and writers) must come into contact with the new form; second, they must decide to use it. The first condition implies that language change is related to the structure of social networks. If a significant number of speakers are isolated from a potential change, then they are unlikely to adopt it BIBREF3 . But mere exposure is not sufficient—we are all exposed to language varieties that are different from our own, yet we nonetheless do not adopt them in our own speech and writing. For example, in the United States, many African American speakers maintain a distinct dialect, despite being immersed in a linguistic environment that differs in many important respects BIBREF4 , BIBREF5 . Researchers have made a similar argument for socioeconomic language differences in Britain BIBREF6 . In at least some cases, these differences reflect questions of identity: because language is a key constituent in the social construction of group identity, individuals must make strategic choices when deciding whether to adopt new linguistic forms BIBREF7 , BIBREF8 , BIBREF9 . By analyzing patterns of language change, we can learn more about the latent structure of social organization: to whom people talk, and how they see themselves.",
"But, while the basic outline of the interaction between language change and social structure is understood, the fine details are still missing: What types of social network connections are most important for language change? To what extent do considerations of identity affect linguistic differences, particularly in an online context? Traditional sociolinguistic approaches lack the data and the methods for asking such detailed questions about language variation and change.",
"In this paper, we show that large-scale social media data can shed new light on how language changes propagate through social networks. We use a data set of Twitter users that contains all public messages for several million accounts, augmented with social network and geolocation metadata. This data set makes it possible to track, and potentially explain, every usage of a linguistic variable as it spreads through social media. Overall, we make the following contributions:"
],
[
"Twitter is an online social networking platform. Users post 140-character messages, which appear in their followers' timelines. Because follower ties can be asymmetric, Twitter serves multiple purposes: celebrities share messages with millions of followers, while lower-degree users treat Twitter as a more intimate social network for mutual communication BIBREF13 . In this paper, we use a large-scale Twitter data set, acquired via an agreement between Microsoft and Twitter. This data set contains all public messages posted between June 2013 and June 2014 by several million users, augmented with social network and geolocation metadata. We excluded retweets, which are explicitly marked with metadata, and focused on messages that were posted in English from within the United States."
],
[
"The explosive rise in popularity of social media has led to an increase in linguistic diversity and creativity BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF1 , BIBREF18 , affecting written language at all levels, from spelling BIBREF19 all the way up to grammatical structure BIBREF20 and semantic meaning across the lexicon BIBREF21 , BIBREF22 . Here, we focus on the most easily observable and measurable level: variation and change in the use of individual words.",
"We take as our starting point words that are especially characteristic of eight cities in the United States. We chose these cities to represent a wide range of geographical regions, population densities, and demographics. We identified the following words as geographically distinctive markers of their associated cities, using SAGE BIBREF23 . Specifically, we followed the approach previously used by Eisenstein to identify community-specific terms in textual corpora BIBREF24 .",
"ain (phonetic spelling of ain't), dese (phonetic spelling of these), yeen (phonetic spelling of you ain't);",
"ard (phonetic spelling of alright), inna (phonetic spelling of in a and in the), lls (laughing like shit), phony (fake);",
"cookout;",
"asl (phonetic spelling of as hell, typically used as an intensifier on Twitter), mfs (motherfuckers);",
"graffiti, tfti (thanks for the information);",
"ard (phonetic spelling of alright), ctfuu (expressive lengthening of ctfu, an abbreviation of cracking the fuck up), jawn (generic noun);",
"hella (an intensifier);",
"inna (phonetic spelling of in a and in the), lls (laughing like shit), stamp (an exclamation indicating emphasis).",
"Linguistically, we can divide these words into three main classes:",
"The origins of cookout, graffiti, hella, phony, and stamp can almost certainly be traced back to spoken language. Some of these words (e.g., cookout and graffiti) are known to all fluent English speakers, but are preferred in certain cities simply as a matter of topic. Other words (e.g., hella BIBREF25 and jawn BIBREF26 ) are dialect markers that are not widely used outside their regions of origin, even after several decades of use in spoken language.",
"ain, ard, asl, inna, and yeen are non-standard spellings that are based on phonetic variation by region, demographics, or situation.",
"ctfuu, lls, mfs, and tfti are phrasal abbreviations. These words are interesting because they are fundamentally textual. They are unlikely to have come from spoken language, and are intrinsic to written social media.",
"Several of these words were undergoing widespread growth in popularity around the time period spanned by our data set. For example, the frequencies of ard, asl, hella, and tfti more than tripled between 2012 and 2013. Our main research question is whether and how these words spread through Twitter. For example, lexical words are mainly transmitted through speech. We would expect their spread to be only weakly correlated with the Twitter social network. In contrast, abbreviations are fundamentally textual in nature, so we would expect their spread to correlate much more closely with the Twitter social network."
],
[
"To focus on communication between peers, we constructed a social network of mutual replies between Twitter users. Specifically, we created a graph in which there is a node for each user in the data set. We then placed an undirected edge between a pair of users if each replied to the other by beginning a message with their username. Our decision to use the reply network (rather than the follower network) was a pragmatic choice: the follower network is not widely available. However, the reply network is also well supported by previous research. For example, Huberman et al. argue that Twitter's mention network is more socially meaningful than its follower network: although users may follow thousands of accounts, they interact with a much more limited set of users BIBREF27 , bounded by a constant known as Dunbar's number BIBREF28 . Finally, we restricted our focus to mutual replies because there are a large number of unrequited replies directed at celebrities. These replies do not indicate a meaningful social connection.",
"We compared our mutual-reply network with two one-directional “in” and “out” networks, in which all public replies are represented by directed edges. The degree distributions of these networks are depicted in fig:degree-dist. As expected, there are a few celebrities with very high in-degrees, and a maximum in-degree of $20,345$ . In contrast, the maximum degree in our mutual-reply network is 248."
],
[
"In order to test whether geographically local social ties are a significant conduit of linguistic influence, we obtained geolocation metadata from Twitter's location field. This field is populated via a combination of self reports and GPS tagging. We aggregated metadata across each user's messages, so that each user was geolocated to the city from which they most commonly post messages. Overall, our data set contains 4.35 million geolocated users, of which 589,562 were geolocated to one of the eight cities listed in sec:data-language. We also included the remaining users in our data set, but were not able to account for their geographical location.",
"Researchers have previously shown that social network connections in online social media tend to be geographically assortative BIBREF29 , BIBREF30 . Our data set is consistent with this finding: for 94.8% of mutual-reply dyads in which both users were geolocated to one of the eight cities listed in sec:data-language, they were both geolocated to the same city. This assortativity motivates our decision to estimate separate influence parameters for local and non-local social connections (see sec:parametric-hawkes)."
],
[
"Our main research goal is to test whether and how geographically distinctive linguistic markers spread through Twitter. With this goal in mind, our first question is whether the adoption of these markers can be viewed as a form of complex contagion. To answer this question, we computed the fraction of users who used one of the words listed in sec:data-language after being exposed to that word by one of their social network connections. Formally, we say that user $i$ exposed user $j$ to word $w$ at time $t$ if and only if the following conditions hold: $i$ used $w$ at time $t$ ; $j$ had not used $w$ before time $t$ ; the social network connection $j$0 was formed before time $j$1 . We define the infection risk for word $j$2 to be the number of users who use word $j$3 after being exposed divided by the total number of users who were exposed. To consider the possibility that multiple exposures have a greater impact on the infection risk, we computed the infection risk after exposures across one, two, and three or more distinct social network connections.",
"The words' infection risks cannot be interpreted directly because relational autocorrelation can also be explained by homophily and external confounds. For example, geographically distinctive non-standard language is more likely to be used by young people BIBREF31 , and online social network connections are assortative by age BIBREF32 . Thus, a high infection risk can also be explained by the confound of age. We therefore used the shuffle test proposed by Anagnostopoulos et al. BIBREF33 , which compares the observed infection risks to infection risks under the null hypothesis that event timestamps are independent. The null hypothesis infection risks are computed by randomly permuting the order of word usage events. If the observed infection risks are substantially higher than the infection risks computed using the permuted data, then this is compatible with social influence.",
"fig:risk-by-exposure depicts the ratios between the words' observed infection risks and the words' infection risks under the null hypothesis, after exposures across one, two, and three or more distinct connections. We computed 95% confidence intervals across the words and across the permutations used in the shuffle test. For all three linguistic classes defined in sec:data-language, the risk ratio for even a single exposure is significantly greater than one, suggesting the existence of social influence. The risk ratio for a single exposure is nearly identical across the three classes. For phonetic spellings and abbreviations, the risk ratio grows with the number of exposures. This pattern suggests that words in these classes exhibit complex contagion—i.e., multiple exposures increase the likelihood of adoption BIBREF35 . In contrast, the risk ratio for lexical words remains the same as the number of exposures increases, suggesting that these words spread by simple contagion.",
"Complex contagion has been linked to a range of behaviors, from participation in collective political action to adoption of avant garde fashion BIBREF35 . A common theme among these behaviors is that they are not cost-free, particularly if the behavior is not legitimated by widespread adoption. In the case of linguistic markers intrinsic to social media, such as phonetic spellings and abbreviations, adopters risk negative social evaluations of their linguistic competency, as well as their cultural authenticity BIBREF36 . In contrast, lexical words are already well known from spoken language and are thus less socially risky. This difference may explain why we do not observe complex contagion for lexical words."
],
[
"In the previous section, we showed that geographically distinctive linguistic markers spread through Twitter, with evidence of complex contagion for phonetic spellings and abbreviations. But, does each social network connection contribute equally? Our second question is therefore whether (1) strong ties and (2) geographically local ties exert greater linguistic influence than other ties. If so, users must socially evaluate the information they receive from these connections, and judge it to be meaningful to their linguistic self-presentation. In this section, we outline two hypotheses regarding their relationships to linguistic influence."
],
[
"Social networks are often characterized in terms of strong and weak ties BIBREF37 , BIBREF3 , with strong ties representing more important social relationships. Strong ties are often densely embedded, meaning that the nodes in question share many mutual friends; in contrast, weak ties often bridge disconnected communities. Bakshy et al. investigated the role of weak ties in information diffusion, through resharing of URLs on Facebook BIBREF38 . They found that URLs shared across strong ties are more likely to be reshared. However, they also found that weak ties play an important role, because users tend to have more weak ties than strong ties, and because weak ties are more likely to be a source of new information. In some respects, language change is similar to traditional information diffusion scenarios, such as resharing of URLs. But, in contrast, language connects with personal identity on a much deeper level than a typical URL. As a result, strong, deeply embedded ties may play a greater role in enforcing community norms.",
"We quantify tie strength in terms of embeddedness. Specifically, we use the normalized mutual friends metric introduced by Adamic and Adar BIBREF39 : ",
"$$s_{i,j} = \\sum _{k \\in \\Gamma (i) \\cap \\Gamma (j)} \\frac{1}{\\log \\left(\n\\#| \\Gamma (k)|\\right)},$$ (Eq. 28) ",
"where, in our setting, $\\Gamma (i)$ is the set of users connected to $i$ in the Twitter mutual-reply network and $\\#|\\Gamma (i)|$ is the size of this set. This metric rewards dyads for having many mutual friends, but counts mutual friends more if their degrees are low—a high-degree mutual friend is less informative than one with a lower-degree. Given this definition, we can form the following hypothesis:",
"The linguistic influence exerted across ties with a high embeddedness value $s_{i,j}$ will be greater than the linguistic influence exerted across other ties."
],
[
"An open question in sociolinguistics is whether and how local covert prestige—i.e., the positive social evaluation of non-standard dialects—affects the adoption of new linguistic forms BIBREF6 . Speakers often explain their linguistic choices in terms of their relationship with their local identity BIBREF40 , but this may be a post-hoc rationalization made by people whose language is affected by factors beyond their control. Indeed, some sociolinguists have cast doubt on the role of “local games” in affecting the direction of language change BIBREF41 .",
"The theory of covert prestige suggests that geographically local social ties are more influential than non-local ties. We do not know of any prior attempts to test this hypothesis quantitatively. Although researchers have shown that local linguistic forms are more likely to be used in messages that address geographically local friends BIBREF42 , they have not attempted to measure the impact of exposure to these forms. This lack of prior work may be because it is difficult to obtain relevant data, and to make reliable inferences from such data. For example, there are several possible explanations for the observation that people often use similar language to that of their geographical neighbors. One is exposure: even online social ties tend to be geographically assortative BIBREF32 , so most people are likely to be exposed to local linguistic forms through local ties. Alternatively, the causal relation may run in the reverse direction, with individuals preferring to form social ties with people whose language matches their own. In the next section, we describe a model that enables us to tease apart the roles of geographic assortativity and local influence, allowing us to test the following hypothesis:",
"The influence toward geographically distinctive linguistic markers is greater when exerted across geographically local ties than across other ties.",
"We note that this hypothesis is restricted in scope to geographically distinctive words. We do not consider the more general hypothesis that geographically local ties are more influential for all types of language change, such as change involving linguistic variables that are associated with gender or socioeconomic status."
],
[
"To test our hypotheses about social evaluation, we require a more sophisticated modeling tool than the simple counting method described in sec:influence. In this section, rather than asking whether a user was previously exposed to a word, we ask by whom, in order to compare the impact of exposures across different types of social network connections. We also consider temporal properties. For example, if a user adopts a new word, should we credit this to an exposure from a weak tie in the past hour, or to an exposure from a strong tie in the past day?",
"Following a probabilistic modeling approach, we treated our Twitter data set as a set of cascades of timestamped events, with one cascade for each of the geographically distinctive words described in sec:data-language. Each event in a word's cascade corresponds to a tweet containing that word. We modeled each cascade as a probabilistic process, and estimated the parameters of this process. By comparing nested models that make progressively finer distinctions between social network connections, we were able to quantitatively test our hypotheses.",
"Our modeling framework is based on a Hawkes process BIBREF11 —a specialization of an inhomogeneous Poisson process—which explains a cascade of timestamped events in terms of influence parameters. In a temporal setting, an inhomogeneous Poisson process says that the number of events $y_{t_1,t_2}$ between $t_1$ and $t_2$ is drawn from a Poisson distribution, whose parameter is the area under a time-varying intensity function over the interval defined by $t_1$ and $t_2$ : ",
"$$y_{t_1,t_2} &\\sim \\text{Poisson}\\left(\\Lambda (t_1,t_2)\\right))\n\\multicolumn{2}{l}{\\text{where}}\\\\\n\\Lambda (t_1,t_2) &= \\int _{t_1}^{t_2} \\lambda (t)\\ \\textrm {d}t.$$ (Eq. 32) ",
" Since the parameter of a Poisson distribution must be non-negative, the intensity function must be constrained to be non-negative for all possible values of $t$ .",
"A Hawkes process is a self-exciting inhomogeneous Poisson process, where the intensity function depends on previous events. If we have a cascade of $N$ events $\\lbrace t_n\\rbrace _{n=1}^N$ , where $t_n$ is the timestamp of event $n$ , then the intensity function is ",
"$$\\lambda (t) = \\mu _t + \\sum _{t_n < t} \\alpha \\, \\kappa (t - t_n),$$ (Eq. 33) ",
"where $\\mu _t$ is the base intensity at time $t$ , $\\alpha $ is an influence parameter that captures the influence of previous events, and $\\kappa (\\cdot )$ is a time-decay kernel.",
"We can extend this framework to vector observations $y_{t_1,t_2} = (y^{(1)}_{t_1, t_2}, \\ldots , y^{(M)}_{t_1,\nt_2})$ and intensity functions $\\lambda (t) =\n(\\lambda ^{(1)}(t), \\ldots , \\lambda ^{(M)}(t))$ , where, in our setting, $M$ is the total number of users in our data set. If we have a cascade of $N$ events $\\lbrace (t_n, m_n)\\rbrace _{n=1}^N$ , where $t_n$ is the timestamp of event $n$ and $m_n \\in \\lbrace 1, \\ldots , M\\rbrace $ is the source of event $n$ , then the intensity function for user $m^{\\prime } \\in \\lbrace 1, \\ldots ,\nM\\rbrace $ is ",
"$$\\lambda ^{(m^{\\prime })}(t) = \\mu ^{(m^{\\prime })}_t + \\sum _{t_n < t} \\alpha _{m_n \\rightarrow m^{\\prime }} \\kappa (t - t_n),$$ (Eq. 34) ",
"where $\\mu _t^{(m^{\\prime })}$ is the base intensity for user $m^{\\prime }$ at time $t$ , $\\alpha _{m_n \\rightarrow m^{\\prime }}$ is a pairwise influence parameter that captures the influence of user $m_n$ on user $m^{\\prime }$ , and $\\kappa (\\cdot )$ is a time-decay kernel. Throughout our experiments, we used an exponential decay kernel $\\kappa (\\Delta t) = e^{-\\gamma \\Delta t}$ . We set the hyperparameter $\\gamma $ so that $\\kappa (\\textrm {1 hour}) = e^{-1}$ .",
"Researchers usually estimate all $M^2$ influence parameters of a Hawkes process (e.g., BIBREF43 , BIBREF44 ). However, in our setting, $M > 10^6$ , so there are $O(10^{12})$ influence parameters. Estimating this many parameters is computationally and statistically intractable, given that our data set includes only $O(10^5)$ events (see the $x$ -axis of fig:ll-diffs for event counts for each word). Moreover, directly estimating these parameters does not enable us to quantitatively test our hypotheses."
],
[
"Instead of directly estimating all $O(M^2)$ pairwise influence parameters, we used Li and Zha's parametric Hawkes process BIBREF12 . This model defines each pairwise influence parameter in terms of a linear combination of pairwise features: ",
"$$\\alpha _{m \\rightarrow m^{\\prime }} = \\theta ^{\\top } f(m \\rightarrow m^{\\prime }),$$ (Eq. 36) ",
"where $f(m \\rightarrow m^{\\prime })$ is a vector of features that describe the relationship between users $m$ and $m^{\\prime }$ . Thus, we only need to estimate the feature weights $\\theta $ and the base intensities. To ensure that the intensity functions $\\lambda ^{(1)}(t),\n\\ldots , \\lambda ^{(M)}(t)$ are non-negative, we must assume that $\\theta $ and the base intensities are non-negative.",
"We chose a set of four binary features that would enable us to test our hypotheses about the roles of different types of social network connections:",
"This feature fires when $m^{\\prime } \\!=\\! m$ . We included this feature to capture the scenario where using a word once makes a user more likely to use it again, perhaps because they are adopting a non-standard style.",
"This feature fires if the dyad $(m, m^{\\prime })$ is in the Twitter mutual-reply network described in sec:data-social. We also used this feature to define the remaining two features. By doing this, we ensured that features F2, F3, and F4 were (at least) as sparse as the mutual-reply network.",
"This feature fires if the dyad $(m,m^{\\prime })$ is in in the Twitter mutual-reply network, and the Adamic-Adar value for this dyad is especially high. Specifically, we require that the Adamic-Adar value be in the 90 $^{\\textrm {th}}$ percentile among all dyads where at least one user has used the word in question. Thus, this feature picks out the most densely embedded ties.",
"This feature fires if the dyad $(m,m^{\\prime })$ is in the Twitter mutual-reply network, and the users were geolocated to the same city, and that city is one of the eight cities listed in sec:data. For other dyads, this feature returns zero. Thus, this feature picks out a subset of the geographically local ties.",
"In sec:results, we describe how we used these features to construct a set of nested models that enabled us to test our hypotheses. In the remainder of this section, we provide the mathematical details of our parameter estimation method."
],
[
"We estimated the parameters using constrained maximum likelihood. Given a cascade of events $\\lbrace (t_n, m_n)\\rbrace _{n=1}^N$ , the log likelihood under our model is ",
"$$\\mathcal {L} = \\sum _{n=1}^N \\log \\lambda ^{(m_n)}(t_n) - \\sum _{m = 1}^M \\int _0^T \\lambda ^{(m)}(t)\\ \\textrm {d}t,$$ (Eq. 42) ",
"where $T$ is the temporal endpoint of the cascade. Substituting in the complete definition of the per-user intensity functions from eq:intensity and eq:alpha, ",
"$$\\mathcal {L} &= \\sum _{n=1}^N \\log {\\left(\\mu ^{(m_n)}_{t_n} + \\sum _{t_{n^{\\prime }} < t_n} \\theta ^{\\top }f(m_{n^{\\prime }} \\rightarrow m_n)\\,\\kappa (t_n - t_{n^{\\prime }}) \\right)} -{} \\\\\n&\\quad \\sum ^M_{m^{\\prime }=1} \\int _0^T \\left(\\mu _t^{(m^{\\prime })} + \\sum _{t_{n^{\\prime }} < t} \\theta ^{\\top } f(m_{n^{\\prime }} \\rightarrow m^{\\prime })\\, \\kappa (t - {t_{n^{\\prime }}})\\right)\\textrm {d}t.$$ (Eq. 43) ",
" If the base intensities are constant with respect to time, then ",
"$$\\mathcal {L} &= \\sum _{n=1}^N \\log {\\left(\\mu ^{(m_n)} + \\sum _{t_{n^{\\prime }} < t_n} \\theta ^{\\top }f(m_{n^{\\prime }} \\rightarrow m_n)\\, \\kappa (t_n - t_{n^{\\prime }}) \\right)} - {}\\\\\n&\\quad \\sum ^M_{m^{\\prime }=1} \\left( T\\mu ^{(m^{\\prime })} + \\sum ^N_{n=1} \\theta ^{\\top } f(m_n \\rightarrow m^{\\prime })\\,(1 - \\kappa (T - t_n))\\right),$$ (Eq. 44) ",
" where the second term includes a sum over all events $n = \\lbrace 1, \\ldots ,\nN\\rbrace $ that contibute to the final intensity $\\lambda ^{(m^{\\prime })}(T).$ To ease computation, however, we can rearrange the second term around the source $m$ rather than the recipient $m^{\\prime }$ : ",
"$$\\mathcal {L} &= \\sum _{n=1}^N \\log {\\left(\\mu ^{(m_n)} + \\sum _{t_{n^{\\prime }} < t_n} \\theta ^{\\top }f(m_{n^{\\prime }} \\rightarrow m_n)\\, \\kappa (t_n - t_{n^{\\prime }}) \\right)} - \\\\\n&\\quad \\sum _{m=1}^M \\left(T\\mu ^{(m)} + \\sum _{\\lbrace n : m_n = m\\rbrace } \\, \\theta ^{\\top } f(m \\rightarrow \\star )\\, (1 - \\kappa (T-t_n))\\right),$$ (Eq. 45) ",
" where we have introduced an aggregate feature vector $f(m\n\\rightarrow \\star ) = \\sum _{m^{\\prime }=1}^M f(m \\rightarrow m^{\\prime })$ . Because the sum $\\sum _{\\lbrace n : m_n = m^{\\prime }\\rbrace } f(m^{\\prime } \\rightarrow \\star )\\,\\kappa (T-t_n)$ does not involve either $\\theta $ or $\\mu ^{(1)}, \\ldots ,\n\\mu ^{(M)}$ , we can pre-compute it. Moreover, we need to do so only for users $m \\in \\lbrace 1, \\ldots , M\\rbrace $ for whom there is at least one event in the cascade.",
"A Hawkes process defined in terms of eq:intensity has a log likelihood that is convex in the pairwise influence parameters and the base intensities. For a parametric Hawkes process, $\\alpha _{m \\rightarrow m^{\\prime }}$ is an affine function of $\\theta $ , so, by composition, the log likelihood is convex in $\\theta $ and remains convex in the base intensities."
],
[
"The first term in the log likelihood and its gradient contains a nested sum over events, which appears to be quadratic in the number of events. However, we can use the exponential decay of the kernel $\\kappa (\\cdot )$ to approximate this term by setting a threshold $\\tau ^{\\star }$ such that $\\kappa (t_n - t_{n^{\\prime }}) = 0$ if $t_n - t_{n^{\\prime }}\n\\ge \\tau ^{\\star }$ . For example, if we set $\\tau ^{\\star } = 24 \\textrm {\nhours}$ , then we approximate $\\kappa (\\tau ^{\\star }) = 3 \\times 10^{-11} \\approx 0$ . This approximation makes the cost of computing the first term linear in the number of events.",
"The second term is linear in the number of social network connections and linear in the number of events. Again, we can use the exponential decay of the kernel $\\kappa (\\cdot )$ to approximate $\\kappa (T - t_n)\n\\approx 0$ for $T - t_n \\ge \\tau ^{\\star }$ , where $\\tau ^{\\star } = 24\n\\textrm { hours}$ . This approximation means that we only need to consider a small number of tweets near temporal endpoint of the cascade. For each user, we also pre-computed $\\sum _{\\lbrace n : m_n = m^{\\prime }\\rbrace }\nf(m^{\\prime } \\rightarrow \\star )\\,\\kappa (T - t_n)$ . Finally, both terms in the log likelihood and its gradient can also be trivially parallelized over users $m = \\lbrace 1, \\ldots , M\\rbrace $ .",
"For a Hawkes process defined in terms of eq:intensity, Ogata showed that additional speedups can be obtained by recursively pre-computing a set of aggregate messages for each dyad $(m,\nm^{\\prime })$ . Each message represents the events from user $m$ that may influence user $m^{\\prime }$ at the time $t_i^{(m^{\\prime })}$ of their $i^{\\textrm {th}}$ event BIBREF45 : $\n&R^{(i)}_{m \\rightarrow m^{\\prime }} \\\\\n&\\quad =\n{\\left\\lbrace \\begin{array}{ll}\n\\kappa (t^{(m^{\\prime })}_{i} - t^{(m^{\\prime })}_{i-1})\\,R^{(i-1)}_{m \\rightarrow m^{\\prime }} + \\sum _{t^{(m^{\\prime })}_{i-1} \\le t^{(m)}_{j} \\le t^{(m^{\\prime })}_i} \\kappa (t^{(m^{\\prime })}_i - t^{(m)}_j) & m\\ne m^{\\prime }\\\\\n\\kappa (t^{(m^{\\prime })}_{i} - t^{(m^{\\prime })}_{i-1}) \\times (1 + R^{(i-1)}_{m \\rightarrow m^{\\prime }}) & m = m^{\\prime }.\n\\end{array}\\right.}\n$ ",
" These aggregate messages do not involve the feature weights $\\theta $ or the base intensities, so they can be pre-computed and reused throughout parameter estimation.",
"For a parametric Hawkes process, it is not necessary to compute a set of aggregate messages for each dyad. It is sufficient to compute a set of aggregate messages for each possible configuration of the features. In our setting, there are only four binary features, and some combinations of features are impossible.",
"Because the words described in sec:data-language are relatively rare, most of the users in our data set never used them. However, it is important to include these users in the model. Because they did not adopt these words, despite being exposed to them by users who did, their presence exerts a negative gradient on the feature weights. Moreover, such users impose a minimal cost on parameter estimation because they need to be considered only when pre-computing feature counts."
],
[
"We optimized the log likelihood with respect to the feature weights $\\theta $ and the base intensities. Because the log likelihood decomposes over users, each base intensity $\\mu ^{(m)}$ is coupled with only the feature weights and not with the other base intensities. Jointly estimating all parameters is inefficient because it does not exploit this structure. We therefore used a coordinate ascent procedure, alternating between updating $\\theta $ and the base intensities. As explained in sec:parametric-hawkes, both $\\theta $ and the base intensities must be non-negative to ensure that intensity functions are also non-negative. At each stage of the coordinate ascent, we performed constrained optimization using the active set method of MATLAB's fmincon function."
],
[
"We used a separate set of parametric Hawkes process models for each of the geographically distinctive linguistic markers described in sec:data-language. Specifically, for each word, we constructed a set of nested models by first creating a baseline model using features F1 (self-activation) and F2 (mutual reply) and then adding in each of the experimental features—i.e., F3 (tie strength) and F4 (local).",
"We tested hypothesis H1 (strong ties are more influential) by comparing the goodness of fit for feature set F1+F2+F3 to that of feature set F1+F2. Similarly, we tested H2 (geographically local ties are more influential) by comparing the goodness of fit for feature set F1+F2+F4 to that of feature set F1+F2.",
"In fig:ll-diffs, we show the improvement in goodness of fit from adding in features F3 and F4. Under the null hypothesis, the log of the likelihood ratio follows a $\\chi ^2$ distribution with one degree of freedom, because the models differ by one parameter. Because we performed thirty-two hypothesis tests (sixteen words, two features), we needed to adjust the significance thresholds to correct for multiple comparisons. We did this using the Benjamini-Hochberg procedure BIBREF46 .",
"Features F3 and F4 did not improve the goodness of fit for less frequent words, such as ain, graffiti, and yeen, which occur fewer than $10^4$ times. Below this count threshold, there is not enough data to statistically distinguish between different types of social network connections. However, above this count threshold, adding in F3 (tie strength) yielded a statistically significant increase in goodness of fit for ard, asl, cookout, hella, jawn, mfs, and tfti. This finding provides evidence in favor of hypothesis H1—that the linguistic influence exerted across densely embedded ties is greater than the linguistic influence exerted across other ties.",
"In contrast, adding in F4 (local) only improved goodness of fit for three words: asl, jawn, and lls. We therefore conclude that support for hypothesis H2—that the linguistic influence exerted across geographically local ties is greater than the linguistic influence across than across other ties—is limited at best.",
"In sec:influence we found that phonetic spellings and abbreviations exhibit complex contagion, while lexical words do not. Here, however, we found no such systematic differences between the three linguistic classes. Although we hypothesize that lexical words propagate mainly outside of social media, we nonetheless see that when these words do propagate across Twitter, their adoption is modulated by tie strength, as is the case for phonetic spellings and abbreviations."
],
[
"Our results in sec:influence demonstrate that language change in social media can be viewed as a form of information diffusion across a social network. Moreover, this diffusion is modulated by a number of sociolinguistic factors. For non-lexical words, such as phonetic spellings and abbreviations, we find evidence of complex contagion: the likelihood of their adoption increases with the number of exposures. For both lexical and non-lexical words, we find evidence that the linguistic influence exerted across densely embedded ties is greater than the linguistic influence exerted across other ties. In contrast, we find no evidence to support the hypothesis that geographically local ties are more influential.",
"Overall, these findings indicate that language change is not merely a process of random diffusion over an undifferentiated social network, as proposed in many simulation studies BIBREF47 , BIBREF48 , BIBREF49 . Rather, some social network connections matter more than others, and social judgments have a role to play in modulating language change. In turn, this conclusion provides large-scale quantitative support for earlier findings from ethnographic studies. A logical next step would be to use these insights to design more accurate simulation models, which could be used to reveal long-term implications for language variation and change.",
"Extending our study beyond North America is a task for future work. Social networks vary dramatically across cultures, with traditional societies tending toward networks with fewer but stronger ties BIBREF3 . The social properties of language variation in these societies may differ as well. Another important direction for future work is to determine the impact of exogenous events, such as the appearance of new linguistic forms in mass media. Exogeneous events pose potential problems for estimating both infection risks and social influence. However, it may be possible to account for these events by incorporating additional data sources, such as search trends. Finally, we plan to use our framework to study the spread of terminology and ideas through networks of scientific research articles. Here too, authors may make socially motivated decisions to adopt specific terms and ideas BIBREF50 . The principles behind these decisions might therefore be revealed by an analysis of linguistic events propagating over a social network."
]
],
"section_name": [
"Introduction",
"Data",
"Linguistic Markers",
"Social network",
"Geography",
"Language Change as Social Influence",
"Social Evaluation of Language Variation",
"Tie Strength",
"Geographic Locality",
"Language Change as a Self-exciting Point Process",
"Parametric Hawkes Process",
"Objective Function",
"Gradients",
"Coordinate Ascent",
"Results",
"Discussion"
]
} | {
"answers": [
{
"annotation_id": [
"3a3be4e57a909e1dba94371ffaa353a802053e61",
"b881296ab4a6e1fa40aeac04eba02be6fa1fcb4c"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"101dbdd2108b3e676061cb693826f0959b47891b"
]
},
{
"annotation_id": [
"014fa1f197cf470c73e7b4801b70273a459fb55a",
"0cea158a45702c550bf1a27490d54811407a9e6f",
"59be594c1974a2f8025cbdef2481c4bb605e5103"
],
"answer": [
{
"evidence": [
"Social networks are often characterized in terms of strong and weak ties BIBREF37 , BIBREF3 , with strong ties representing more important social relationships. Strong ties are often densely embedded, meaning that the nodes in question share many mutual friends; in contrast, weak ties often bridge disconnected communities. Bakshy et al. investigated the role of weak ties in information diffusion, through resharing of URLs on Facebook BIBREF38 . They found that URLs shared across strong ties are more likely to be reshared. However, they also found that weak ties play an important role, because users tend to have more weak ties than strong ties, and because weak ties are more likely to be a source of new information. In some respects, language change is similar to traditional information diffusion scenarios, such as resharing of URLs. But, in contrast, language connects with personal identity on a much deeper level than a typical URL. As a result, strong, deeply embedded ties may play a greater role in enforcing community norms.",
"We quantify tie strength in terms of embeddedness. Specifically, we use the normalized mutual friends metric introduced by Adamic and Adar BIBREF39 :"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Social networks are often characterized in terms of strong and weak ties BIBREF37 , BIBREF3 , with strong ties representing more important social relationships. Strong ties are often densely embedded, meaning that the nodes in question share many mutual friends; in contrast, weak ties often bridge disconnected communities. Bakshy et al.",
"We quantify tie strength in terms of embeddedness. Specifically, we use the normalized mutual friends metric introduced by Adamic and Adar BIBREF39 :"
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We quantify tie strength in terms of embeddedness. Specifically, we use the normalized mutual friends metric introduced by Adamic and Adar BIBREF39 :"
],
"extractive_spans": [],
"free_form_answer": "Yes, a normalized mutual friends metric",
"highlighted_evidence": [
"We quantify tie strength in terms of embeddedness. Specifically, we use the normalized mutual friends metric introduced by Adamic and Adar BIBREF39 :"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We quantify tie strength in terms of embeddedness. Specifically, we use the normalized mutual friends metric introduced by Adamic and Adar BIBREF39 :",
"$$s_{i,j} = \\sum _{k \\in \\Gamma (i) \\cap \\Gamma (j)} \\frac{1}{\\log \\left( \\#| \\Gamma (k)|\\right)},$$ (Eq. 28)",
"where, in our setting, $\\Gamma (i)$ is the set of users connected to $i$ in the Twitter mutual-reply network and $\\#|\\Gamma (i)|$ is the size of this set. This metric rewards dyads for having many mutual friends, but counts mutual friends more if their degrees are low—a high-degree mutual friend is less informative than one with a lower-degree. Given this definition, we can form the following hypothesis:"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We quantify tie strength in terms of EMBEDDEDNESS.",
"pecifically, we use the normalized mutual friends metric introduced by Adamic and Adar BIBREF39 :\n\n$$s_{i,j} = \\sum _{k \\in \\Gamma (i) \\cap \\Gamma (j)} \\frac{1}{\\log \\left( \\#| \\Gamma (k)|\\right)},$$ (Eq. 28)\n\nwhere, in our setting, $\\Gamma (i)$ is the set of users connected to $i$ in the Twitter mutual-reply network and $\\#|\\Gamma (i)|$ is the size of this set."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"3f36bf0444f095984404b6a31a9210ea510697ab"
]
},
{
"annotation_id": [
"63f900a354db553c5eae6a65fcfdf8826b8d00f7",
"f160a3b9b480353d860d7888e472f7d0a2eff7f6"
],
"answer": [
{
"evidence": [
"The explosive rise in popularity of social media has led to an increase in linguistic diversity and creativity BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF1 , BIBREF18 , affecting written language at all levels, from spelling BIBREF19 all the way up to grammatical structure BIBREF20 and semantic meaning across the lexicon BIBREF21 , BIBREF22 . Here, we focus on the most easily observable and measurable level: variation and change in the use of individual words.",
"We take as our starting point words that are especially characteristic of eight cities in the United States. We chose these cities to represent a wide range of geographical regions, population densities, and demographics. We identified the following words as geographically distinctive markers of their associated cities, using SAGE BIBREF23 . Specifically, we followed the approach previously used by Eisenstein to identify community-specific terms in textual corpora BIBREF24 .",
"ain, ard, asl, inna, and yeen are non-standard spellings that are based on phonetic variation by region, demographics, or situation."
],
"extractive_spans": [],
"free_form_answer": "variation and change in the use of words characteristic from eight US cities that have non-standard spellings",
"highlighted_evidence": [
" Here, we focus on the most easily observable and measurable level: variation and change in the use of individual words.",
"We take as our starting point words that are especially characteristic of eight cities in the United States. We chose these cities to represent a wide range of geographical regions, population densities, and demographics. We identified the following words as geographically distinctive markers of their associated cities, using SAGE BIBREF23 . Specifically, we followed the approach previously used by Eisenstein to identify community-specific terms in textual corpora BIBREF24 .",
"ain, ard, asl, inna, and yeen are non-standard spellings that are based on phonetic variation by region, demographics, or situation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Several of these words were undergoing widespread growth in popularity around the time period spanned by our data set. For example, the frequencies of ard, asl, hella, and tfti more than tripled between 2012 and 2013. Our main research question is whether and how these words spread through Twitter. For example, lexical words are mainly transmitted through speech. We would expect their spread to be only weakly correlated with the Twitter social network. In contrast, abbreviations are fundamentally textual in nature, so we would expect their spread to correlate much more closely with the Twitter social network."
],
"extractive_spans": [
"phonetic spelling",
"abbreviation",
"lexical words"
],
"free_form_answer": "",
"highlighted_evidence": [
"For example, lexical words are mainly transmitted through speech. We would expect their spread to be only weakly correlated with the Twitter social network. In contrast, abbreviations are fundamentally textual in nature, so we would expect their spread to correlate much more closely with the Twitter social network."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"101dbdd2108b3e676061cb693826f0959b47891b"
]
},
{
"annotation_id": [
"56b82311a9f24e6dbe6e1f903b29c6e5078b7baa"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"101dbdd2108b3e676061cb693826f0959b47891b"
]
}
],
"nlp_background": [
"infinity",
"two",
"two",
"two"
],
"paper_read": [
"no",
"yes",
"yes",
"yes"
],
"question": [
"Does the paper discuss limitations of considering only data from Twitter?",
"Did they represent tie strength only as number of social ties in a networks? ",
"What sociolinguistic variables (phonetic spellings) did they analyze? ",
"What older dialect markers did they explore?"
],
"question_id": [
"cdb211be0340bb18ba5a9ee988e9df0e2ba8b793",
"4cb2e80da73ae36de372190b4c1c490b72977ef8",
"a064337bafca8cf01e222950ea97ebc184c47bc0",
"993d5bef2bf1c0cd537342ef76d4b952f0588b83"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"social",
"sociolinguistics ",
"sociolinguistics ",
"sociolinguistics "
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1. Degree distributions for our mutual-reply network and “in” and “out” networks.",
"Fig. 2. Relative infection risks for words in each of the three linguistic classes defined in § 2.1. The figure depicts 95% confidence intervals, computed using the shuffle test [4].",
"Fig. 3. Improvement in goodness of fit from adding in features F3 (tie strength) and F4 (local). The dotted line corresponds to the threshold for statistical significance at p < 0.05 using a likelihood ratio test with the Benjamini-Hochberg correction."
],
"file": [
"5-Figure1-1.png",
"6-Figure2-1.png",
"13-Figure3-1.png"
]
} | [
"Did they represent tie strength only as number of social ties in a networks? ",
"What sociolinguistic variables (phonetic spellings) did they analyze? "
] | [
[
"1609.02075-Tie Strength-0",
"1609.02075-Tie Strength-3"
],
[
"1609.02075-Linguistic Markers-1",
"1609.02075-Linguistic Markers-14",
"1609.02075-Linguistic Markers-0",
"1609.02075-Linguistic Markers-12"
]
] | [
"Yes, a normalized mutual friends metric",
"variation and change in the use of words characteristic from eight US cities that have non-standard spellings"
] | 215 |
1708.09025 | Unsupervised Terminological Ontology Learning based on Hierarchical Topic Modeling | In this paper, we present hierarchical relationbased latent Dirichlet allocation (hrLDA), a data-driven hierarchical topic model for extracting terminological ontologies from a large number of heterogeneous documents. In contrast to traditional topic models, hrLDA relies on noun phrases instead of unigrams, considers syntax and document structures, and enriches topic hierarchies with topic relations. Through a series of experiments, we demonstrate the superiority of hrLDA over existing topic models, especially for building hierarchies. Furthermore, we illustrate the robustness of hrLDA in the settings of noisy data sets, which are likely to occur in many practical scenarios. Our ontology evaluation results show that ontologies extracted from hrLDA are very competitive with the ontologies created by domain experts. | {
"paragraphs": [
[
"Although researchers have made significant progress on knowledge acquisition and have proposed many ontologies, for instance, WordNet BIBREF0 , DBpedia BIBREF1 , YAGO BIBREF2 , Freebase, BIBREF3 Nell BIBREF4 , DeepDive BIBREF5 , Domain Cartridge BIBREF6 , Knowledge Vault BIBREF7 , INS-ES BIBREF8 , iDLER BIBREF9 , and TransE-NMM BIBREF10 , current ontology construction methods still rely heavily on manual parsing and existing knowledge bases. This raises challenges for learning ontologies in new domains. While a strong ontology parser is effective in small-scale corpora, an unsupervised model is beneficial for learning new entities and their relations from new data sources, and is likely to perform better on larger corpora.",
"In this paper, we focus on unsupervised terminological ontology learning and formalize a terminological ontology as a hierarchical structure of subject-verb-object triplets. We divide a terminological ontology into two components: topic hierarchies and topic relations. Topics are presented in a tree structure where each node is a topic label (noun phrase), the root node represents the most general topic, the leaf nodes represent the most specific topics, and every topic is composed of its topic label and its descendant topic labels. Topic hierarchies are preserved in topic paths, and a topic path connects a list of topics labels from the root to a leaf. Topic relations are semantic relationships between any two topics or properties used to describe one topic. Figure FIGREF1 depicts an example of a terminological ontology learned from a corpus about European cities. We extract terminological ontologies by applying unsupervised hierarchical topic modeling and relation extraction to plain text.",
"Topic modeling was originally used for topic extraction and document clustering. The classical topic model, latent Dirichlet allocation (LDA) BIBREF11 , simplifies a document as a bag of its words and describes a topic as a distribution of words. Prior research BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 has shown that LDA-based approaches are adequate for (terminological) ontology learning. However, these models are deficient in that they still need human supervision to decide the number of topics, and to pick meaningful topic labels usually from a list of unigrams. Among models not using unigrams, LDA-based Global Similarity Hierarchy Learning (LDA+GSHL) BIBREF13 only extracts a subset of relations: “broader\" and “related\" relations. In addition, the topic hierarchies of KB-LDA BIBREF17 rely on hypernym-hyponym pairs capturing only a subset of hierarchies.",
"Considering the shortcomings of the existing methods, the main objectives of applying topic modeling to ontology learning are threefold.",
"To achieve the first objective, we extract noun phrases and then propose a sampling method to estimate the number of topics. For the second objective, we use language parsing and relation extraction to learn relations for the noun phrases. Regarding the third objective, we adapt and improve the hierarchical latent Dirichlet allocation (hLDA) model BIBREF19 , BIBREF20 . hLDA is not ideal for ontology learning because it builds topics from unigrams (which are not descriptive enough to serve as entities in ontologies) and the topics may contain words from multiple domains when input data have documents from many domains (see Section SECREF2 and Figure FIGREF55 ). Our model, hrLDA, overcomes these deficiencies. In particular, hrLDA represents topics with noun phrases, uses syntax and document structures such as paragraph indentations and item lists, assigns multiple topic paths for every document, and allows topic trees to grow vertically and horizontally.",
"The primary contributions of this work can be specified as follows.",
"The rest of this paper is organized into five parts. In Section 2, we provide a brief background of hLDA. In Section 3, we present our hrLDA model and the ontology generation method. In Section 4, we demonstrate empirical results regarding topic hierarchies and generated terminological ontologies. Finally, in Section 5, we present some concluding remarks and discuss avenues for future work and improvements."
],
[
"In this section, we introduce our main baseline model, hierarchical latent Dirichlet allocation (hLDA), and some of its extensions. We start from the components of hLDA - latent Dirichlet allocation (LDA) and the Chinese Restaurant Process (CRP)- and then explain why hLDA needs improvements in both building hierarchies and drawing topic paths.",
"LDA is a three-level Bayesian model in which each document is a composite of multiple topics, and every topic is a distribution over words. Due to the lack of determinative information, LDA is unable to distinguish different instances containing the same content words, (e.g. “I trimmed my polished nails\" and “I have just hammered many rusty nails\"). In addition, in LDA all words are probabilistically independent and equally important. This is problematic because different words and sentence elements should have different contributions to topic generation. For instance, articles contribute little compared to nouns, and sentence subjects normally contain the main topics of a document.",
"Introduced in hLDA, CRP partitions words into several topics by mimicking a process in which customers sit down in a Chinese restaurant with an infinite number of tables and an infinite number of seats per table. Customers enter one by one, with a new customer choosing to sit at an occupied table or a new table. The probability of a new customer sitting at the table with the largest number of customers is the highest. In reality, customers do not always join the largest table but prefer to dine with their acquaintances. The theory of distance-dependent CRP was formerly proposed by David Blei BIBREF21 . We provide later in Section SECREF15 an explicit formula for topic partition given that adjacent words and sentences tend to deal with the same topics.",
"hLDA combines LDA with CRP by setting one topic path with fixed depth INLINEFORM0 for each document. The hierarchical relationships among nodes in the same path depend on an INLINEFORM1 dimensional Dirichlet distribution that actually arranges the probabilities of topics being on different topic levels. Despite the fact that the single path was changed to multiple paths in some extensions of hLDA - the nested Chinese restaurant franchise processes BIBREF22 and the nested hierarchical Dirichlet Processes BIBREF23 , - this topic path drawing strategy puts words from different domains into one topic when input data are mixed with topics from multiple domains. This means that if a corpus contains documents in four different domains, hLDA is likely to include words from the four domains in every topic (see Figure FIGREF55 ). In light of the various inadequacies discussed above, we propose a relation-based model, hrLDA. hrLDA incorporates semantic topic modeling with relation extraction to integrate syntax and has the capacity to provide comprehensive hierarchies even in corpora containing mixed topics."
],
[
"The main problem we address in this section is generating terminological ontologies in an unsupervised fashion. The fundamental concept of hrLDA is as follows. When people construct a document, they start with selecting several topics. Then, they choose some noun phrases as subjects for each topic. Next, for each subject they come up with relation triplets to describe this subject or its relationships with other subjects. Finally, they connect the subject phrases and relation triplets to sentences via reasonable grammar. The main topic is normally described with the most important relation triplets. Sentences in one paragraph, especially adjacent sentences, are likely to express the same topic.",
"We begin by describing the process of reconstructing LDA. Subsequently, we explain relation extraction from heterogeneous documents. Next, we propose an improved topic partition method over CRP. Finally, we demonstrate how to build topic hierarchies that bind with extracted relation triplets."
],
[
"Documents are typically composed of chunks of texts, which may be referred to as sections in Word documents, paragraphs in PDF documents, slides in presentation documents, etc. Each chunk is composed of multiple sentences that are either atomic or complex in structure, which means a document is also a collection of atomic and/or complex sentences. An atomic sentence (see module INLINEFORM0 in Figure FIGREF10 ) is a sentence that contains only one subject ( INLINEFORM1 ), one object ( INLINEFORM2 ) and one verb ( INLINEFORM3 ) between the subject and the object. For every atomic sentence whose object is also a noun phrase, there are at least two relation triplets (e.g., “The tiger that gave the excellent speech is handsome\" has relation triplets: (tiger, give, speech), (speech, be given by, tiger), and (tiger, be, handsome)). By contrast, a complex sentence can be subdivided into multiple atomic sentences. Given that the syntactic verb in a relation triplet is determined by the subject and the object, a document INLINEFORM4 in a corpus INLINEFORM5 can be ultimately reduced to INLINEFORM6 subject phrases (we convert objects to subjects using passive voice) associated with INLINEFORM7 relation triplets INLINEFORM8 . Number INLINEFORM9 is usually larger than the actual number of noun phrases in document INLINEFORM10 . By replacing the unigrams in LDA with relation triplets, we retain definitive information and assign salient noun phrases high weights.",
"We define INLINEFORM0 as a Dirichlet distribution parameterized by hyperparameters INLINEFORM1 , INLINEFORM2 as a multinomial distribution parameterized by hyperparameters INLINEFORM3 , INLINEFORM4 as a Dirichlet distribution parameterized by INLINEFORM5 , and INLINEFORM6 as a multinomial distribution parameterized by INLINEFORM7 . We assume the corpus has INLINEFORM8 topics. Assigning INLINEFORM9 topics to the INLINEFORM10 relation triplets of document INLINEFORM11 follows a multinomial distribution INLINEFORM12 with prior INLINEFORM13 . Selecting the INLINEFORM14 relation triplets for document INLINEFORM15 given the INLINEFORM16 topics follows a multinomial distribution INLINEFORM17 with prior INLINEFORM18 . We denote INLINEFORM19 as the list of relation triplet lists extracted from all documents in the corpus, and INLINEFORM20 as the list of topic assignments of INLINEFORM21 . We denote the relation triplet counts of documents in the corpus by INLINEFORM22 . The graphical representation of the relation-based latent Dirichlet allocation (rLDA) model is illustrated in Figure FIGREF10 .",
"The plate notation can be decomposed into two types of Dirichlet-multinomial conjugated structures: document-topic distribution INLINEFORM0 and topic-relation distribution INLINEFORM1 . Hence, the joint distribution of INLINEFORM2 and INLINEFORM3 can be represented as DISPLAYFORM0 ",
"where INLINEFORM0 is the number of unique relations in all documents, INLINEFORM1 is the number of occurrences of the relation triplet INLINEFORM2 generated by topic INLINEFORM3 in all documents, and INLINEFORM4 is the number of relation triplets generated by topic INLINEFORM5 in document INLINEFORM6 . INLINEFORM7 is a conjugate prior for INLINEFORM8 and thus the posterior distribution is a new Dirichlet distribution parameterized by INLINEFORM9 . The same rule applies to INLINEFORM10 ."
],
[
"Extracting relation triplets is the essential step of hrLDA, and it is also the key process for converting a hierarchical topic tree to an ontology structure. The idea is to find all syntactically related noun phrases and their connections using a language parser such as the Stanford NLP parser BIBREF24 and Ollie BIBREF25 . Generally, there are two types of relation triplets:",
"Subject-predicate-object-based relations,",
"e.g., New York is the largest city in the United States INLINEFORM0 (New York, be the largest city in, the United States);",
"Noun-based/hidden relations,",
"e.g., Queen Elizabeth INLINEFORM0 (Elizabeth, be, queen).",
"A special type of relation triplets can be extracted from presentation documents such as those written in PowerPoint using document structures. Normally lines in a slide are not complete sentences, which means language parsing does not work. However, indentations and bullet types usually express inclusion relationships between adjacent lines. Starting with the first line in an itemized section, our algorithm scans the content in a slide line by line, and creates relations based on the current item and the item that is one level higher."
],
[
"As mentioned in Section 2, CRP always assigns the highest probability to the largest table, which assumes customers are more likely to sit at the table that has the largest number of customers. This ignores the social reality that a person is more willing to choose the table where his/her closest friend is sitting even though the table also seats unknown people who are actually friends of friends. Similarly with human-written documents, adjacent sentences usually describe the same topics. We consider a restaurant table as a topic, and a person sitting at any of the tables as a noun phrase. In order to penalize the largest topic and assign high probabilities to adjacent noun phrases being in the same topics, we introduce an improved partition method, Acquaintance Chinese Restaurant Process (ACRP).",
"The ultimate purposes of ACRP are to estimate INLINEFORM0 , the number of topics for rLDA, and to set the initial topic distribution states for rLDA. Suppose a document is read from top to bottom and left to right. As each noun phrase belongs to one sentence and one text chunk (e.g., section, paragraph and slide), the locations of all noun phrases in a document can be mapped to a two-dimensional space where sentence location is the x axis and text chunk location is the y axis (the first noun phrase of a document holds value (0, 0)). More specifically, every noun phrase has four attributes: content, location, one-to-many relation triplets, and document ID. Noun phrases in the same text chunk are more likely to be “acquaintances;\" they are even closer to each other if they are in the same sentence. In contrast to CRP, ACRP assigns probabilities based on closeness, which is specified in the following procedure.",
"Let INLINEFORM0 be the integer-valued random variable corresponding to the index of a topic assigned to the INLINEFORM1 phrase. Draw a probability INLINEFORM2 from Equations EQREF18 to EQREF25 below for the INLINEFORM3 noun phrase INLINEFORM4 , joining each of the existing INLINEFORM5 topics and the new INLINEFORM6 topic given the topic assignments of previous INLINEFORM7 noun phrases, INLINEFORM8 . If a noun phrase joins any of the existing k topics, we denote the corresponding topic index by INLINEFORM9 .",
"The probability of choosing the INLINEFORM0 topic: DISPLAYFORM0 ",
"The probability of selecting any of the INLINEFORM0 topics:",
"if the content of INLINEFORM0 is synonymous with or an acronym of a previously analyzed noun phrase INLINEFORM1 INLINEFORM2 in the INLINEFORM3 topic, DISPLAYFORM0 ",
"else if the document ID of INLINEFORM0 is different from all document IDs belonging to the INLINEFORM1 topic, DISPLAYFORM0 ",
"otherwise, DISPLAYFORM0 ",
"where INLINEFORM0 refers to the current number of noun phrases in the INLINEFORM1 topic, INLINEFORM2 represents the vector of chunk location differences of the INLINEFORM3 noun phrase and all members in the INLINEFORM4 topic, INLINEFORM5 stands for the vector of sentence location differences, and INLINEFORM6 is a penalty factor.",
"Normalize the ( INLINEFORM0 ) probabilities to guarantee they are each in the range of [0, 1] and their sum is equal to 1.",
"Based on the probabilities EQREF18 to EQREF25 , we sample a topic index INLINEFORM0 from INLINEFORM1 for every noun phrase, and we count the number of unique topics INLINEFORM2 in the end. We shuffle the order of documents and iterate ACRP until INLINEFORM3 is unchanged."
],
[
"The procedure for extending ACRP to hierarchies is essential to why hrLDA outperforms hLDA. Instead of a predefined tree depth INLINEFORM0 , the tree depth for hrLDA is optional and data-driven. More importantly, clustering decisions are made given a global distribution of all current non-partitioned phrases (leaves) in our algorithm. This means there can be multiple paths traversed down a topic tree for each document. With reference to the topic tree, every node has a noun phrase as its label and represents a topic that may have multiple sub-topics. The root node is visited by all phrases. In practice, we do not link any phrases to the root node, as it contains the entire vocabulary. An inner node of a topic tree contains a selected topic label. A leaf node contains an unprocessed noun phrase. We define a hashmap INLINEFORM1 with a document ID as the key and the current leaf nodes of the document as the value. We denote the current tree level by INLINEFORM2 . We next outline the overall algorithm.",
"We start with the root node ( INLINEFORM0 ) and apply rLDA to all the documents in a corpus.",
"Collect the current leaf nodes of every document. INLINEFORM0 initially contains all noun phrases in the corpus. Assign a cluster partition to the leaf nodes in each document based on ACRP and sample the cluster partition until the number of topics of all noun phrases in INLINEFORM1 is stable or the iteration reaches the predefined number of iteration times (whichever occurs first).",
"Mark the number of topics (child nodes) of parent node INLINEFORM0 at level INLINEFORM1 as INLINEFORM2 . Build a INLINEFORM3 - dimensional topic proportion vector INLINEFORM4 based on INLINEFORM5 .",
"For every noun phrase INLINEFORM0 in document INLINEFORM1 , form the topic assignments INLINEFORM2 based on INLINEFORM3 .",
"Generate relation triplets from INLINEFORM0 given INLINEFORM1 and the associated topic vector INLINEFORM2 .",
"Eliminate partitioned leaf nodes from INLINEFORM0 . Update the current level INLINEFORM1 by 1.",
"If phrases in INLINEFORM0 are not yet completely partitioned to the next level and INLINEFORM1 is less than INLINEFORM2 , continue the following steps. For each leaf node, we set the top phrase (i.e., the phrase having the highest probability) as the topic label of this leaf node and the leaf node becomes an inner node. We next update INLINEFORM3 and repeat procedures INLINEFORM4 .",
"To summarize this process more succinctly: we build the topic hierarchies with rLDA in a divisive way (see Figure FIGREF35 ). We start with the collection of extracted noun phrases and split them using rLDA and ACRP. Then, we apply the procedure recursively until each noun phrase is selected as a topic label. After every rLDA assignment, each inner node only contains the topic label (top phrase), and the rest of the phrases are divided into nodes at the next level using ACRP and rLDA. Hence, we build a topic tree with each node as a topic label (noun phrase), and each topic is composed of its topic labels and the topic labels of the topic's descendants. In the end, we finalize our terminological ontology by linking the extracted relation triplets with the topic labels as subjects.",
"We use collapsed Gibbs sampling BIBREF26 for inference from posterior distribution INLINEFORM0 based on Equation EQREF11 . Assume the INLINEFORM1 noun phrase INLINEFORM2 in parent node INLINEFORM3 comes from document INLINEFORM4 . We denote unassigned noun phrases from document INLINEFORM5 in parent node INLINEFORM6 by INLINEFORM7 , and unique noun phrases in parent node INLINEFORM8 by INLINEFORM9 . We simplify the probability of assigning the INLINEFORM10 noun phrase in parent node INLINEFORM11 to topic INLINEFORM12 among INLINEFORM13 topics as DISPLAYFORM0 ",
"where INLINEFORM0 refers to all topic assignments other than INLINEFORM1 , INLINEFORM2 is multinational document-topic distribution for unassigned noun phrases INLINEFORM3 , INLINEFORM4 is the multinational topic-relation distribution for topic INLINEFORM5 , INLINEFORM6 is the number of occurrences of noun phrase INLINEFORM7 in topic INLINEFORM8 except the INLINEFORM9 noun phrase in INLINEFORM10 , INLINEFORM11 stands for the number of times that topic INLINEFORM12 occurs in INLINEFORM13 excluding the INLINEFORM14 noun phrase in INLINEFORM15 . The time complexity of hrLDA is INLINEFORM16 , where INLINEFORM17 is the number of topics at level INLINEFORM18 . The space complexity is INLINEFORM19 .",
"In order to build a hierarchical topic tree of a specific domain, we must generate a subset of the relation triplets using external constraints or semantic seeds via a pruning process BIBREF27 . As mentioned above, in a relation triplet, each relation connects one subject and one object. By assembling all subject and object pairs, we can build an undirected graph with the objects and the subjects constituting the nodes of the graph BIBREF28 . Given one or multiple semantic seeds as input, we first collect a set of nodes that are connected to the seed(s), and then take the relations from the set of nodes as input to retrieve associated subject and object pairs. This process constitutes one recursive step. The subject and object pairs become the input of the subsequent recursive step."
],
[
"We utilized the Apache poi library to parse texts from pdfs, word documents and presentation files; the MALLET toolbox BIBREF29 for the implementations of LDA, optimized_LDA BIBREF30 and hLDA; the Apache Jena library to add relations, properties and members to hierarchical topic trees; and Stanford Protege for illustrating extracted ontologies. We make our code and data available . We used the same empirical hyper-parameter setting (i.e., INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 ) across all our experiments. We then demonstrate the evaluation results from two aspects: topic hierarchy and ontology rule."
],
[
"In this section, we present the evaluation results of hrLDA tested against optimized_LDA, hLDA, and phrase_hLDA (i.e., hLDA based on noun phrases) as well as ontology examples that hrLDA extracted from real-world text data. The entire corpus we generated contains 349,362 tokens (after removing stop words and cleaning) and is built from articles on INLINEFORM0 INLINEFORM1 . It includes 84 presentation files, articles from 1,782 Wikipedia pages and 3,000 research papers that were published in IEEE manufacturing conference proceedings within the last decade. In order to see the performance in data sets of different scales, we also used a smaller corpus Wiki that holds the articles collected from the Wikipedia pages only.",
"We extract a single level topic tree using each of the four models; hrLDA becomes rLDA, and phrase_hLDA becomes phrase-based LDA. We have tested the average perplexity and running time performance of ten independent runs on each of the four models BIBREF31 , BIBREF32 . Equation EQREF41 defines the perplexity, which we employed as an empirical measure. DISPLAYFORM0 ",
"where INLINEFORM0 is a vector containing the INLINEFORM1 relation triplets in document INLINEFORM2 , and INLINEFORM3 is the topic assignment for INLINEFORM4 .",
"The comparison results on our Wiki corpus are shown in Figure FIGREF42 . hrLDA yields the lowest perplexity and reasonable running time. As the running time spent on parameter optimization is extremely long (the optimized_LDA requires 19.90 hours to complete one run), for efficiency, we adhere to the fixed parameter settings for hrLDA.",
"Superiority",
"Figures FIGREF43 to FIGREF49 illustrates the perplexity trends of the three hierarchical topic models (i.e., hrLDA, phrase_hLDA and hLDA) applied to both the Wiki corpus and the entire corpus with INLINEFORM0 “chip\" given different level settings. From left to right, hrLDA retains the lowest perplexities compared with other models as the corpus size grows. Furthermore, from top to bottom, hrLDA remains stable as the topic level increases, whereas the perplexity of phrase_hLDA and especially the perplexity of hLDA become rapidly high. Figure FIGREF52 highlights the perplexity values of the three models with confidence intervals in the final state. As shown in the two types of experiments, hrLDA has the lowest average perplexities and smallest confidence intervals, followed by phrase_hLDA, and then hLDA.",
"Our interpretation is that hLDA and phrase_hLDA tend to assign terms to the largest topic and thus do not guarantee that each topic path contains terms with similar meaning.",
"Robustness",
"Figure FIGREF55 shows exhaustive hierarchical topic trees extracted from a small text sample with topics from four domains: INLINEFORM0 , INLINEFORM1 INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 . hLDA tends to mix words from different domains into one topic. For instance, words on the first level of the topic tree come from all four domains. This is because the topic path drawing method in existing hLDA-based models takes words in the most important topic of every document and labels them as the main topic of the corpus. In contrast, hrLDA is able to create four big branches for the four domains from the root. Hence, it generates clean topic hierarchies from the corpus."
],
[
"The visualization of one concrete ontology on the INLINEFORM0 INLINEFORM1 domain is presented in Figure FIGREF60 . For instance, Topic packaging contains topic integrated circuit packaging, and topic label jedec is associated with relation triplet (jedec, be short for, joint electron device engineering council).",
"We use KB-LDA, phrase_hLDA, and LDA+GSHL as our baseline methods, and compare ontologies extracted from hrLDA, KB-LDA, phrase_hLDA, and LDA+GSHL with DBpedia ontologies. We use precision, recall and F-measure for this ontology evaluation. A true positive case is an ontology rule that can be found in an extracted ontology and the associated ontology of DBpedia. A false positive case is an incorrectly identified ontology rule. A false negative case is a missed ontology rule. Table TABREF61 shows the evaluation results of ontologies extracted from Wikipedia articles pertaining to European Capital Cities (Corpus E), Office Buildings in Chicago (Corpus O) and Birds of the United States (Corpus B) using hrLDA, KB-LDA, phrase_hLDA (tree depth INLINEFORM0 = 3), and LDA+GSHL in contrast to these gold ontologies belonging to DBpedia. The three corpora used in this evaluation were collected from Wikipedia abstracts, the same text source of DBpedia. The seeds of hrLDA and the root concepts of LDA+GSHL are capital, building, and bird. For both KB-LDA and phrase_hLDA we kept the top five tokens in each topic as each node of their topic trees is a distribution/list of phrases. hrLDA achieves the highest precision and F-measure scores in the three experiments compared to the other models. KB-LDA performs better than phrase_hLDA and LDA+GSHL, and phrase_hLDA performs similarly to LDA+GSHL. In general, hrLDA works well especially when the pre-knowledge already exists inside the corpora. Consider the following two statements taken from the corpus on Birds of the United States as an example. In order to use two short documents “The Acadian flycatcher is a small insect-eating bird.\" and “The Pacific loon is a medium-sized member of the loon.\" to infer that the Acadian flycatcher and the Pacific loon are both related to topic bird, the pre-knowledge that “the loon is a species of bird\" is required for hrLDA. This example explains why the accuracy of extracting ontologies from this kind of corpus is low."
],
[
"In this paper, we have proposed a completely unsupervised model, hrLDA, for terminological ontology learning. hrLDA is a domain-independent and self-learning model, which means it is very promising for learning ontologies in new domains and thus can save significant time and effort in ontology acquisition.",
"We have compared hrLDA with popular topic models to interpret how our algorithm learns meaningful hierarchies. By taking syntax and document structures into consideration, hrLDA is able to extract more descriptive topics. In addition, hrLDA eliminates the restrictions on the fixed topic tree depth and the limited number of topic paths. Furthermore, ACRP allows hrLDA to create more reasonable topics and to converge faster in Gibbs sampling.",
"We have also compared hrLDA to several unsupervised ontology learning models and shown that hrLDA can learn applicable terminological ontologies from real world data. Although hrLDA cannot be applied directly in formal reasoning, it is efficient for building knowledge bases for information retrieval and simple question answering. Also, hrLDA is sensitive to the quality of extracted relation triplets. In order to give optimal answers, hrLDA should be embedded in more complex probabilistic modules to identify true facts from extracted ontology rules. Finally, one issue we have not addressed in our current study is capturing pre-knowledge. Although a direct solution would be adding the missing information to the data set, a more advanced approach would be to train topic embeddings to extract hidden semantics."
],
[
"This work was supported in part by Intel Corporation, Semiconductor Research Corporation (SRC). We are obliged to Professor Goce Trajcevski from Northwestern University for his insightful suggestions and discussions. This work was partly conducted using the Protege resource, which is supported by grant GM10331601 from the National Institute of General Medical Sciences of the United States National Institutes of Health."
]
],
"section_name": [
"Introduction",
"Background",
"Hierarchical Relation-based Latent Dirichlet Allocation",
"Relation-based Latent Dirichlet Allocation",
"Relation Triplet Extraction",
"Acquaintance Chinese Restaurant Process",
"Nested Acquaintance Chinese Restaurant Process",
"Implementation",
"Hierarchy Evaluation",
"Gold Standard-based Ontology Evaluation",
"Concluding Remarks",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"a39e4b0839cdf692a0abd9c4e3fb2111b3746891",
"bec5140d7d590cc85a392b193177ddaf63295083"
],
"answer": [
{
"evidence": [
"Figure FIGREF55 shows exhaustive hierarchical topic trees extracted from a small text sample with topics from four domains: INLINEFORM0 , INLINEFORM1 INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 . hLDA tends to mix words from different domains into one topic. For instance, words on the first level of the topic tree come from all four domains. This is because the topic path drawing method in existing hLDA-based models takes words in the most important topic of every document and labels them as the main topic of the corpus. In contrast, hrLDA is able to create four big branches for the four domains from the root. Hence, it generates clean topic hierarchies from the corpus."
],
"extractive_spans": [],
"free_form_answer": "4",
"highlighted_evidence": [
"Figure FIGREF55 shows exhaustive hierarchical topic trees extracted from a small text sample with topics from four domains: INLINEFORM0 , INLINEFORM1 INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Figure FIGREF55 shows exhaustive hierarchical topic trees extracted from a small text sample with topics from four domains: INLINEFORM0 , INLINEFORM1 INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 . hLDA tends to mix words from different domains into one topic. For instance, words on the first level of the topic tree come from all four domains. This is because the topic path drawing method in existing hLDA-based models takes words in the most important topic of every document and labels them as the main topic of the corpus. In contrast, hrLDA is able to create four big branches for the four domains from the root. Hence, it generates clean topic hierarchies from the corpus."
],
"extractive_spans": [
"four domains"
],
"free_form_answer": "",
"highlighted_evidence": [
"Figure FIGREF55 shows exhaustive hierarchical topic trees extracted from a small text sample with topics from four domains: INLINEFORM0 , INLINEFORM1 INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 . hLDA tends to mix words from different domains into one topic. For instance, words on the first level of the topic tree come from all four domains. This is because the topic path drawing method in existing hLDA-based models takes words in the most important topic of every document and labels them as the main topic of the corpus. In contrast, hrLDA is able to create four big branches for the four domains from the root. Hence, it generates clean topic hierarchies from the corpus."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"291734afa11d25d92eecb3fcd87352a51d800b75",
"3c9294bb740c89aa855c2a47ce927ba803acd3ca"
],
"answer": [
{
"evidence": [
"hLDA combines LDA with CRP by setting one topic path with fixed depth INLINEFORM0 for each document. The hierarchical relationships among nodes in the same path depend on an INLINEFORM1 dimensional Dirichlet distribution that actually arranges the probabilities of topics being on different topic levels. Despite the fact that the single path was changed to multiple paths in some extensions of hLDA - the nested Chinese restaurant franchise processes BIBREF22 and the nested hierarchical Dirichlet Processes BIBREF23 , - this topic path drawing strategy puts words from different domains into one topic when input data are mixed with topics from multiple domains. This means that if a corpus contains documents in four different domains, hLDA is likely to include words from the four domains in every topic (see Figure FIGREF55 ). In light of the various inadequacies discussed above, we propose a relation-based model, hrLDA. hrLDA incorporates semantic topic modeling with relation extraction to integrate syntax and has the capacity to provide comprehensive hierarchies even in corpora containing mixed topics."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"hrLDA incorporates semantic topic modeling with relation extraction to integrate syntax and has the capacity to provide comprehensive hierarchies even in corpora containing mixed topics."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"Extracting relation triplets is the essential step of hrLDA, and it is also the key process for converting a hierarchical topic tree to an ontology structure. The idea is to find all syntactically related noun phrases and their connections using a language parser such as the Stanford NLP parser BIBREF24 and Ollie BIBREF25 . Generally, there are two types of relation triplets:"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Extracting relation triplets is the essential step of hrLDA, and it is also the key process for converting a hierarchical topic tree to an ontology structure. The idea is to find all syntactically related noun phrases and their connections using a language parser such as the Stanford NLP parser BIBREF24 and Ollie BIBREF25 ."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"016666a626b4393b9773f88eef511e8a530ffbc3",
"ef59ef874ab1cf6d2f5a5fd9a439d0b14267cc6b"
],
"answer": [
{
"evidence": [
"We use KB-LDA, phrase_hLDA, and LDA+GSHL as our baseline methods, and compare ontologies extracted from hrLDA, KB-LDA, phrase_hLDA, and LDA+GSHL with DBpedia ontologies. We use precision, recall and F-measure for this ontology evaluation. A true positive case is an ontology rule that can be found in an extracted ontology and the associated ontology of DBpedia. A false positive case is an incorrectly identified ontology rule. A false negative case is a missed ontology rule. Table TABREF61 shows the evaluation results of ontologies extracted from Wikipedia articles pertaining to European Capital Cities (Corpus E), Office Buildings in Chicago (Corpus O) and Birds of the United States (Corpus B) using hrLDA, KB-LDA, phrase_hLDA (tree depth INLINEFORM0 = 3), and LDA+GSHL in contrast to these gold ontologies belonging to DBpedia. The three corpora used in this evaluation were collected from Wikipedia abstracts, the same text source of DBpedia. The seeds of hrLDA and the root concepts of LDA+GSHL are capital, building, and bird. For both KB-LDA and phrase_hLDA we kept the top five tokens in each topic as each node of their topic trees is a distribution/list of phrases. hrLDA achieves the highest precision and F-measure scores in the three experiments compared to the other models. KB-LDA performs better than phrase_hLDA and LDA+GSHL, and phrase_hLDA performs similarly to LDA+GSHL. In general, hrLDA works well especially when the pre-knowledge already exists inside the corpora. Consider the following two statements taken from the corpus on Birds of the United States as an example. In order to use two short documents “The Acadian flycatcher is a small insect-eating bird.\" and “The Pacific loon is a medium-sized member of the loon.\" to infer that the Acadian flycatcher and the Pacific loon are both related to topic bird, the pre-knowledge that “the loon is a species of bird\" is required for hrLDA. This example explains why the accuracy of extracting ontologies from this kind of corpus is low."
],
"extractive_spans": [
"precision",
"recall",
"F-measure"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use KB-LDA, phrase_hLDA, and LDA+GSHL as our baseline methods, and compare ontologies extracted from hrLDA, KB-LDA, phrase_hLDA, and LDA+GSHL with DBpedia ontologies. We use precision, recall and F-measure for this ontology evaluation"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We utilized the Apache poi library to parse texts from pdfs, word documents and presentation files; the MALLET toolbox BIBREF29 for the implementations of LDA, optimized_LDA BIBREF30 and hLDA; the Apache Jena library to add relations, properties and members to hierarchical topic trees; and Stanford Protege for illustrating extracted ontologies. We make our code and data available . We used the same empirical hyper-parameter setting (i.e., INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 ) across all our experiments. We then demonstrate the evaluation results from two aspects: topic hierarchy and ontology rule.",
"We use KB-LDA, phrase_hLDA, and LDA+GSHL as our baseline methods, and compare ontologies extracted from hrLDA, KB-LDA, phrase_hLDA, and LDA+GSHL with DBpedia ontologies. We use precision, recall and F-measure for this ontology evaluation. A true positive case is an ontology rule that can be found in an extracted ontology and the associated ontology of DBpedia. A false positive case is an incorrectly identified ontology rule. A false negative case is a missed ontology rule. Table TABREF61 shows the evaluation results of ontologies extracted from Wikipedia articles pertaining to European Capital Cities (Corpus E), Office Buildings in Chicago (Corpus O) and Birds of the United States (Corpus B) using hrLDA, KB-LDA, phrase_hLDA (tree depth INLINEFORM0 = 3), and LDA+GSHL in contrast to these gold ontologies belonging to DBpedia. The three corpora used in this evaluation were collected from Wikipedia abstracts, the same text source of DBpedia. The seeds of hrLDA and the root concepts of LDA+GSHL are capital, building, and bird. For both KB-LDA and phrase_hLDA we kept the top five tokens in each topic as each node of their topic trees is a distribution/list of phrases. hrLDA achieves the highest precision and F-measure scores in the three experiments compared to the other models. KB-LDA performs better than phrase_hLDA and LDA+GSHL, and phrase_hLDA performs similarly to LDA+GSHL. In general, hrLDA works well especially when the pre-knowledge already exists inside the corpora. Consider the following two statements taken from the corpus on Birds of the United States as an example. In order to use two short documents “The Acadian flycatcher is a small insect-eating bird.\" and “The Pacific loon is a medium-sized member of the loon.\" to infer that the Acadian flycatcher and the Pacific loon are both related to topic bird, the pre-knowledge that “the loon is a species of bird\" is required for hrLDA. This example explains why the accuracy of extracting ontologies from this kind of corpus is low."
],
"extractive_spans": [
"We use KB-LDA, phrase_hLDA, and LDA+GSHL as our baseline methods, and compare ontologies extracted from hrLDA, KB-LDA, phrase_hLDA, and LDA+GSHL with DBpedia ontologies. We use precision, recall and F-measure for this ontology evaluation. A true positive case is an ontology rule that can be found in an extracted ontology and the associated ontology of DBpedia. A false positive case is an incorrectly identified ontology rule. A false negative case is a missed ontology rule. "
],
"free_form_answer": "",
"highlighted_evidence": [
"We utilized the Apache poi library to parse texts from pdfs, word documents and presentation files; the MALLET toolbox BIBREF29 for the implementations of LDA, optimized_LDA BIBREF30 and hLDA; the Apache Jena library to add relations, properties and members to hierarchical topic trees; and Stanford Protege for illustrating extracted ontologies. We make our code and data available . We used the same empirical hyper-parameter setting (i.e., INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 ) across all our experiments. We then demonstrate the evaluation results from two aspects: topic hierarchy and ontology rule.",
"We use KB-LDA, phrase_hLDA, and LDA+GSHL as our baseline methods, and compare ontologies extracted from hrLDA, KB-LDA, phrase_hLDA, and LDA+GSHL with DBpedia ontologies. We use precision, recall and F-measure for this ontology evaluation. A true positive case is an ontology rule that can be found in an extracted ontology and the associated ontology of DBpedia. A false positive case is an incorrectly identified ontology rule. A false negative case is a missed ontology rule. Table TABREF61 shows the evaluation results of ontologies extracted from Wikipedia articles pertaining to European Capital Cities (Corpus E), Office Buildings in Chicago (Corpus O) and Birds of the United States (Corpus B) using hrLDA, KB-LDA, phrase_hLDA (tree depth INLINEFORM0 = 3), and LDA+GSHL in contrast to these gold ontologies belonging to DBpedia. The three corpora used in this evaluation were collected from Wikipedia abstracts, the same text source of DBpedia. The seeds of hrLDA and the root concepts of LDA+GSHL are capital, building, and bird. For both KB-LDA and phrase_hLDA we kept the top five tokens in each topic as each node of their topic trees is a distribution/list of phrases. hrLDA achieves the highest precision and F-measure scores in the three experiments compared to the other models. KB-LDA performs better than phrase_hLDA and LDA+GSHL, and phrase_hLDA performs similarly to LDA+GSHL. In general, hrLDA works well especially when the pre-knowledge already exists inside the corpora. Consider the following two statements taken from the corpus on Birds of the United States as an example. In order to use two short documents “The Acadian flycatcher is a small insect-eating bird.\" and “The Pacific loon is a medium-sized member of the loon.\" to infer that the Acadian flycatcher and the Pacific loon are both related to topic bird, the pre-knowledge that “the loon is a species of bird\" is required for hrLDA. This example explains why the accuracy of extracting ontologies from this kind of corpus is low."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"2c68b56655d4ba75ae1e9d5b3cf848a295ad61b5",
"3e1185197829dcd2e083bc54e10523575fcd3daf"
],
"answer": [
{
"evidence": [
"Extracting relation triplets is the essential step of hrLDA, and it is also the key process for converting a hierarchical topic tree to an ontology structure. The idea is to find all syntactically related noun phrases and their connections using a language parser such as the Stanford NLP parser BIBREF24 and Ollie BIBREF25 . Generally, there are two types of relation triplets:"
],
"extractive_spans": [],
"free_form_answer": "By extracting syntactically related noun phrases and their connections using a language parser.",
"highlighted_evidence": [
"Extracting relation triplets is the essential step of hrLDA, and it is also the key process for converting a hierarchical topic tree to an ontology structure. The idea is to find all syntactically related noun phrases and their connections using a language parser such as the Stanford NLP parser BIBREF24 and Ollie BIBREF25 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To achieve the first objective, we extract noun phrases and then propose a sampling method to estimate the number of topics. For the second objective, we use language parsing and relation extraction to learn relations for the noun phrases. Regarding the third objective, we adapt and improve the hierarchical latent Dirichlet allocation (hLDA) model BIBREF19 , BIBREF20 . hLDA is not ideal for ontology learning because it builds topics from unigrams (which are not descriptive enough to serve as entities in ontologies) and the topics may contain words from multiple domains when input data have documents from many domains (see Section SECREF2 and Figure FIGREF55 ). Our model, hrLDA, overcomes these deficiencies. In particular, hrLDA represents topics with noun phrases, uses syntax and document structures such as paragraph indentations and item lists, assigns multiple topic paths for every document, and allows topic trees to grow vertically and horizontally.",
"Documents are typically composed of chunks of texts, which may be referred to as sections in Word documents, paragraphs in PDF documents, slides in presentation documents, etc. Each chunk is composed of multiple sentences that are either atomic or complex in structure, which means a document is also a collection of atomic and/or complex sentences. An atomic sentence (see module INLINEFORM0 in Figure FIGREF10 ) is a sentence that contains only one subject ( INLINEFORM1 ), one object ( INLINEFORM2 ) and one verb ( INLINEFORM3 ) between the subject and the object. For every atomic sentence whose object is also a noun phrase, there are at least two relation triplets (e.g., “The tiger that gave the excellent speech is handsome\" has relation triplets: (tiger, give, speech), (speech, be given by, tiger), and (tiger, be, handsome)). By contrast, a complex sentence can be subdivided into multiple atomic sentences. Given that the syntactic verb in a relation triplet is determined by the subject and the object, a document INLINEFORM4 in a corpus INLINEFORM5 can be ultimately reduced to INLINEFORM6 subject phrases (we convert objects to subjects using passive voice) associated with INLINEFORM7 relation triplets INLINEFORM8 . Number INLINEFORM9 is usually larger than the actual number of noun phrases in document INLINEFORM10 . By replacing the unigrams in LDA with relation triplets, we retain definitive information and assign salient noun phrases high weights.",
"Extracting relation triplets is the essential step of hrLDA, and it is also the key process for converting a hierarchical topic tree to an ontology structure. The idea is to find all syntactically related noun phrases and their connections using a language parser such as the Stanford NLP parser BIBREF24 and Ollie BIBREF25 . Generally, there are two types of relation triplets:",
"Subject-predicate-object-based relations,",
"e.g., New York is the largest city in the United States INLINEFORM0 (New York, be the largest city in, the United States);"
],
"extractive_spans": [
" syntax and document structures such as paragraph indentations and item lists, assigns multiple topic paths for every document, and allows topic trees to grow vertically and horizontally.",
". By contrast, a complex sentence can be subdivided into multiple atomic sentences. Given that the syntactic verb in a relation triplet is determined by the subject and the object, a document INLINEFORM4 in a corpus INLINEFORM5 can be ultimately reduced to INLINEFORM6 subject phrases (we convert objects to subjects using passive voice) associated with INLINEFORM7 relation triplets INLINEFORM8",
" The idea is to find all syntactically related noun phrases and their connections using a language parser such as the Stanford NLP parser BIBREF24 and Ollie BIBREF25 . "
],
"free_form_answer": "",
"highlighted_evidence": [
" Our model, hrLDA, overcomes these deficiencies. In particular, hrLDA represents topics with noun phrases, uses syntax and document structures such as paragraph indentations and item lists, assigns multiple topic paths for every document, and allows topic trees to grow vertically and horizontally.",
"Documents are typically composed of chunks of texts, which may be referred to as sections in Word documents, paragraphs in PDF documents, slides in presentation documents, etc. Each chunk is composed of multiple sentences that are either atomic or complex in structure, which means a document is also a collection of atomic and/or complex sentences. An atomic sentence (see module INLINEFORM0 in Figure FIGREF10 ) is a sentence that contains only one subject ( INLINEFORM1 ), one object ( INLINEFORM2 ) and one verb ( INLINEFORM3 ) between the subject and the object. For every atomic sentence whose object is also a noun phrase, there are at least two relation triplets (e.g., “The tiger that gave the excellent speech is handsome\" has relation triplets: (tiger, give, speech), (speech, be given by, tiger), and (tiger, be, handsome)). By contrast, a complex sentence can be subdivided into multiple atomic sentences. Given that the syntactic verb in a relation triplet is determined by the subject and the object, a document INLINEFORM4 in a corpus INLINEFORM5 can be ultimately reduced to INLINEFORM6 subject phrases (we convert objects to subjects using passive voice) associated with INLINEFORM7 relation triplets INLINEFORM8 . Number INLINEFORM9 is usually larger than the actual number of noun phrases in document INLINEFORM10 . By replacing the unigrams in LDA with relation triplets, we retain definitive information and assign salient noun phrases high weights.",
"Extracting relation triplets is the essential step of hrLDA, and it is also the key process for converting a hierarchical topic tree to an ontology structure. The idea is to find all syntactically related noun phrases and their connections using a language parser such as the Stanford NLP parser BIBREF24 and Ollie BIBREF25 . Generally, there are two types of relation triplets:\n\nSubject-predicate-object-based relations,\n\ne.g., New York is the largest city in the United States INLINEFORM0 (New York, be the largest city in, the United States);"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How many domains do they create ontologies for?",
"Do they separately extract topic relations and topic hierarchies in their model?",
"How do they measure the usefulness of obtained ontologies compared to domain expert ones?",
"How do they obtain syntax from raw documents in hrLDA?"
],
"question_id": [
"a8e5e10d13b3f21dd11e8eb58e30cc25efc56e93",
"949a2bc34176e47a4d895bcc3223f2a960f15a81",
"70abb108c3170e81f8725ddc1a3f2357be5a4959",
"ce504a7ee2c1f068ef4dde8d435245b4e77bb0b5"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: A representation of a terminological ontology. (Left: topic hierarchies) Topic city is composed of most populous city, capital, London, Berlin, etc. City → capital → London and city → capital → Berlin are two topic paths. (Right: topic relations) Every topic label has relations to itself and/or with other labels. Be the capital city of Germany is one relation/property of topic Berlin. Be on the north of is one relation of topic Berlin to London.",
"Figure 2: Plate notation of rLDA",
"Figure 3: Graphical representation of hrLDA",
"Figure 4: Comparison results of hrLDA, phrase hLDA, hLDA and optimized LDA on perplexity and running time",
"Figure 5: Perplexity trends within 2000 iterations with level = 2",
"Figure 7: Perplexity trends within 2000 iterations with level = 10",
"Figure 6: Perplexity trends within 2000 iterations with level = 6",
"Figure 8: Average perplexities with confidence intervals of the three models in the final 2000th iteration with level = 10",
"Figure 10: A 10-level semiconductor ontology that contains 2063 topics and 6084 relation triplets",
"Figure 9: Performance of hLDA and hrLDA on a toy corpus of diversified topics",
"Table I: Precision, recall and F-measure (%)"
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"5-Figure3-1.png",
"6-Figure4-1.png",
"7-Figure5-1.png",
"7-Figure7-1.png",
"7-Figure6-1.png",
"8-Figure8-1.png",
"8-Figure10-1.png",
"8-Figure9-1.png",
"9-TableI-1.png"
]
} | [
"How many domains do they create ontologies for?",
"How do they obtain syntax from raw documents in hrLDA?"
] | [
[
"1708.09025-Hierarchy Evaluation-8"
],
[
"1708.09025-Relation Triplet Extraction-1",
"1708.09025-Relation-based Latent Dirichlet Allocation-0",
"1708.09025-Introduction-4",
"1708.09025-Relation Triplet Extraction-0",
"1708.09025-Relation Triplet Extraction-2"
]
] | [
"4",
"By extracting syntactically related noun phrases and their connections using a language parser."
] | 216 |
2004.04478 | Recommendation Chart of Domains for Cross-Domain Sentiment Analysis:Findings of A 20 Domain Study | Cross-domain sentiment analysis (CDSA) helps to address the problem of data scarcity in scenarios where labelled data for a domain (known as the target domain) is unavailable or insufficient. However, the decision to choose a domain (known as the source domain) to leverage from is, at best, intuitive. In this paper, we investigate text similarity metrics to facilitate source domain selection for CDSA. We report results on 20 domains (all possible pairs) using 11 similarity metrics. Specifically, we compare CDSA performance with these metrics for different domain-pairs to enable the selection of a suitable source domain, given a target domain. These metrics include two novel metrics for evaluating domain adaptability to help source domain selection of labelled data and utilize word and sentence-based embeddings as metrics for unlabelled data. The goal of our experiments is a recommendation chart that gives the K best source domains for CDSA for a given target domain. We show that the best K source domains returned by our similarity metrics have a precision of over 50%, for varying values of K. | {
"paragraphs": [
[
"Sentiment analysis (SA) deals with automatic detection of opinion orientation in text BIBREF0. Domain-specificity of sentiment words, and, as a result, sentiment analysis is also a well-known challenge. A popular example being `unpredictable' that is positive for a book review (as in `The plot of the book is unpredictable') but negative for an automobile review (as in `The steering of the car is unpredictable'). Therefore, a classifier that has been trained on book reviews may not perform as well for automobile reviews BIBREF1.",
"However, sufficient datasets may not be available for a domain in which an SA system is to be trained. This has resulted in research in cross-domain sentiment analysis (CDSA). CDSA refers to approaches where the training data is from a different domain (referred to as the `source domain') as compared to that of the test data (referred to as the `target domain'). ben2007analysis show that similarity between the source and target domains can be used as indicators for domain adaptation, in general.",
"In this paper, we validate the idea for CDSA. We use similarity metrics as a basis for source domain selection. We implement an LSTM-based sentiment classifier and evaluate its performance for CDSA for a dataset of reviews from twenty domains. We then compare it with similarity metrics to understand which metrics are useful. The resultant deliverable is a recommendation chart of source domains for cross-domain sentiment analysis.",
"The key contributions of this work are:",
"We compare eleven similarity metrics (four that use labelled data for the target domain, seven that do not use labelled data for the target domain) with the CDSA performance of 20 domains. Out of these eleven metrics, we introduce two new metrics.",
"Based on CDSA results, we create a recommendation chart that prescribes domains that are the best as the source or target domain, for each of the domains.",
"In general, we show which similarity metrics are crucial indicators of the benefit to a target domain, in terms of source domain selection for CDSA.",
"With rising business applications of sentiment analysis, the convenience of cross-domain adaptation of sentiment classifiers is an attractive proposition. We hope that our recommendation chart will be a useful resource for the rapid development of sentiment classifiers for a domain of which a dataset may not be available. Our approach is based on the hypothesis that if source and target domains are similar, their CDSA accuracy should also be higher given all other conditions (such as data size) are the same. The rest of the paper is organized as follows. We describe related work in Section SECREF2 We then introduce our sentiment classifier in Section SECREF3 and the similarity metrics in Section SECREF4 The results are presented in Section SECREF5 followed by a discussion in Section SECREF6 Finally, we conclude the paper in Section SECREF7"
],
[
"Cross-domain adaptation has been reported for several NLP tasks such as part-of-speech tagging BIBREF2, dependency parsing BIBREF3, and named entity recognition BIBREF4. Early work in CDSA is by denecke2009sentiwordnet. They show that lexicons such as SentiWordnet do not perform consistently for sentiment classification of multiple domains. Typical statistical approaches for CDSA use active learning BIBREF5, co-training BIBREF6 or spectral feature alignment BIBREF7. In terms of the use of topic models for CDSA, he2011automatically adapt the joint sentiment tying model by introducing domain-specific sentiment-word priors. Similarly, cross-domain sentiment and topic lexicons have been extracted using automatic methods BIBREF8. glorot2011domain present a method for domain adaptation of sentiment classification that uses deep architectures. Our work differs from theirs in terms of computational intensity (deep architecture) and scale (4 domains only).",
"In this paper, we compare similarity metrics with cross-domain adaptation for the task of sentiment analysis. This has been performed for several other tasks. Recent work by dai2019using uses similarity metrics to select the domain from which pre-trained embeddings should be obtained for named entity recognition. Similarly, schultz2018distance present a method for source domain selection as a weighted sum of similarity metrics. They use statistical classifiers such as logistic regression and support vector machines. However, the similarity measures used are computationally intensive. To the best of our knowledge, this is the first work at this scale that compares different cost-effective similarity metrics with the performance of CDSA."
],
[
"The core of this work is a sentiment classifier for different domains. We use the DRANZIERA benchmark dataset BIBREF9, which consists of Amazon reviews from 20 domains such as automatives, baby products, beauty products, etc. The detailed list can be seen in Table 1. To ensure that the datasets are balanced across all domains, we randomly select 5000 positive and 5000 negative reviews from each domain. The length of the reviews ranges from 5 words to 1654 words across all domains, with an average length ranging from 71 words to 125 words per domain. We point the reader to the original paper for detailed dataset statistics.",
"We normalize the dataset by removing numerical values, punctuations, stop words, and changing all words to the lower case. To train the sentiment classifier, we use an LSTM-based sentiment classifier. It consists of an embedding layer initialized with pre-trained GloVe word embeddings of 100 dimensions. We specify a hidden layer with 128 units and maintain the batch size at 300. We train this model for 20 epochs with a dropout factor of 0.2 and use sigmoid as the activation function. For In-domain sentiment analysis, we report a 5-fold classification accuracy with a train-test split of 8000 and 2000 reviews. In cross-domain set up, we report an average accuracy over 5 splits of 2000 reviews in the target domain in Table TABREF5."
],
[
"In table TABREF6, we present the n-gram percent match among the domain data used in our experiments. We observe that the n-gram match from among this corpora is relatively low and simple corpus similarity measures which use orthographic techniques cannot be used to obtain domain similarity. Hence, we propose the use of the metrics detailed below to perform our experiments.",
"We use a total of 11 metrics over two scenarios: the first that uses labelled data, while the second that uses unlabelled data.",
"Labelled Data: Here, each review in the target domain data is labelled either positive or negative, and a number of such labelled reviews are insufficient in size for training an efficient model.",
"Unlabelled Data: Here, positive and negative labels are absent from the target domain data, and the number of such reviews may or may not be sufficient in number.",
"We explain all our metrics in detail later in this section. These 11 metrics can also be classified into two categories:",
"Symmetric Metrics - The metrics which consider domain-pairs $(D_1,D_2)$ and $(D_2,D_1)$ as the same and provide similar results for them viz. Significant Words Overlap, Chameleon Words Similarity, Symmetric KL Divergence, Word2Vec embeddings, GloVe embeddings, FastText word embeddings, ELMo based embeddings and Universal Sentence Encoder based embeddings.",
"Asymmetric Metrics - The metrics which are 2-way in nature i.e., $(D_1,D_2)$ and $(D_2,D_1)$ have different similarity values viz. Entropy Change, Doc2Vec embeddings, and FastText sentence embeddings. These metrics offer additional advantage as they can help decide which domain to train from and which domain to test on amongst $D_1$ and $D_2$."
],
[
"Training models for prediction of sentiment can cost one both valuable time and resources. The availability of pre-trained models is cost-effective in terms of both time and resources. One can always train new models and test for each source domain since labels are present for the source domain data. However, it is feasible only when trained classification models are available for all source domains. If pre-trained models are unavailable, training for each source domain can be highly intensive both in terms of time and resources. This makes it important to devise easy-to-compute metrics that use labelled data in the source and target domains.",
"When target domain data is labelled, we use the following four metrics for comparing and ranking source domains for a particular target domain:"
],
[
"All words in a domain are not significant for sentiment expression. For example, comfortable is significant in the `Clothing' domain but not as significant in the `Movie' domain. In this metric, we build upon existing work by sharma2018identifying and extract significant words from each domain using the $\\chi ^2$ test. This method relies on computing the statistical significance of a word based on the polarity of that word in the domain. For our experiments, we consider only the words which appear at least 10 times in the corpus and have a $\\chi ^2$ value greater than or equal to 1. The $\\chi ^2$ value is calculated as follows:",
"Where ${c_p}^w$ and ${c_n}^w$ are the observed counts of word $w$ in positive and negative reviews, respectively. $\\mu ^w$ is the expected count, which is kept as half of the total number of occurrences of $w$ in the corpus. We hypothesize that, if a domain-pair $(D_1,D_2)$ shares a larger number of significant words than the pair $(D_1,D_3)$, then $D_1$ is closer to $D_2$ as compared to $D_3$, since they use relatively higher number of similar words for sentiment expression. For every target domain, we compute the intersection of significant words with all other domains and rank them on the basis of intersection count. The utility of this metric is that it can also be used in a scenario where target domain data is unlabelled, but source domain data is labelled. It is due to the fact that once we obtain significant words in the source domain, we just need to search for them in the target domain to find out common significant words."
],
[
"KL Divergence can be used to compare the probabilistic distribution of polar words in two domains BIBREF10. A lower KL Divergence score indicates that the probabilistic distribution of polar words in two domains is identical. This implies that the domains are close to each other, in terms of sentiment similarity. Therefore, to rank source domains for a target domain using this metric, we inherit the concept of symmetric KL Divergence proposed by murthy2018judicious and use it to compute average Symmetric KL-Divergence of common polar words shared by a domain-pair. We label a word as `polar' for a domain if,",
"where $P$ is the probability of a word appearing in a review which is labelled positive and $N$ is the probability of a word appearing in a review which is labelled negative.",
"SKLD of a polar word for domain-pair $(D_1,D_2)$ is calculated as:",
"where $P_i$ and $N_i$ are probabilities of a word appearing under positively labelled and negatively labelled reviews, respectively, in domain $i$. We then take an average of all common polar words.",
"We observe that, on its own, this metric performs rather poorly. Upon careful analysis of results, we concluded that the imbalance in the number of polar words being shared across domain-pairs is a reason for poor performance. To mitigate this, we compute a confidence term for a domain-pair $(D_1,D_2)$ using the Jaccard Similarity Coefficient which is calculated as follows:",
"where $C$ is the number of common polar words and $W_1$ and $W_2$ are number of polar words in $D_1$ and $D_2$ respectively. The intuition behind this being that the domain-pairs having higher percentage of polar words overlap should be ranked higher compared to those having relatively higher number of polar words. For example, we prefer $(C:40,W_1 :50,W_2 :50)$ over $(C:200,W_1 :500,W_2 :500)$ even though 200 is greater than 40. To compute the final similarity value, we add the reciprocal of $J$ to the SKLD value since a larger value of $J$ will add a smaller fraction to SLKD value. For a smaller SKLD value, the domains would be relatively more similar. This is computed as follows:",
"Domain pairs are ranked in increasing order of this similarity value. After the introduction of the confidence term, a significant improvement in the results is observed."
],
[
"This metric is our novel contribution for domain adaptability evaluation. It helps in detection of `Chameleon Word(s)' which change their polarity across domains BIBREF11. The motivation comes from the fact that chameleon words directly affect the CDSA accuracy. For example, poignant is positive in movie domain whereas negative in many other domains viz. Beauty, Clothing etc.",
"For every common polar word between two domains, $L_1 \\ Distance$ between two vectors $[P_1,N_1]$ and $[P_2,N_2]$ is calculated as;",
"The overall distance is an average overall common polar words. Similar to SKLD, the confidence term based on Jaccard Similarity Coefficient is used to counter the imbalance of common polar word count between domain-pairs.",
"Domain pairs are ranked in increasing order of final value."
],
[
"Entropy is the degree of randomness. A relatively lower change in entropy, when two domains are concatenated, indicates that the two domains contain similar topics and are therefore closer to each other. This metric is also our novel contribution. Using this metric, we calculate the percentage change in the entropy when the target domain is concatenated with the source domain. We calculate the entropy as the combination of entropy for unigrams, bigrams, trigrams, and quadrigrams. We consider only polar words for unigrams. For bi, tri and quadrigrams, we give priority to polar words by using a weighted entropy function and this weighted entropy $E$ is calculated as:",
"Here, $X$ is the set of n-grams that contain at least one polar word, $Y$ is the set of n-grams which do not contain any polar word, and $w$ is the weight. For our experiments, we keep the value of $w$ as 1 for unigrams and 5 for bi, tri, and quadrigrams.",
"We then say that a source domain $D_2$ is more suitable for target domain $D_1$ as compared to source domain $D_3$ if;",
"where $D_2+D_1$ indicates combined data obtained by mixing $D_1$ in $D_2$ and $\\Delta E$ indicates percentage change in entropy before and after mixing of source and target domains.",
"Note that this metric offers the advantage of asymmetricity, unlike the other three metrics for labelled data."
],
[
"For unlabelled target domain data, we utilize word and sentence embeddings-based similarity as a metric and use various embedding models. To train word embedding based models, we use Word2Vec BIBREF12, GloVe BIBREF13, FastText BIBREF14, and ELMo BIBREF15. We also exploit sentence vectors from models trained using Doc2Vec BIBREF16, FastText, and Universal Sentence Encoder BIBREF17. In addition to using plain sentence vectors, we account for sentiment in sentences using SentiWordnet BIBREF18, where each review is given a sentiment score by taking harmonic mean over scores (obtained from SentiWordnet) of words in a review."
],
[
"We train SKIPGRAM models on all the domains to obtain word embeddings. We build models with 50 dimensions where the context window is chosen to be 5. For each domain pair, we then compare embeddings of common adjectives in both the domains by calculating Angular Similarity BIBREF17. It was observed that cosine similarity values were very close to each other, making it difficult to clearly separate domains. Since Angular Similarity distinguishes nearly parallel vectors much better, we use it instead of Cosine Similarity. We obtain a similarity value by averaging over all common adjectives. For the final similarity value of this metric, we use Jaccard Similarity Coefficient here as well:",
"For a target domain, source domains are ranked in decreasing order of final similarity value."
],
[
"Doc2Vec represents each sentence by a dense vector which is trained to predict words in the sentence, given the model. It tries to overcome the weaknesses of the bag-of-words model. Similar to Word2Vec, we train Doc2Vec models on each domain to extract sentence vectors. We train the models over 100 epochs for 100 dimensions, where the learning rate is 0.025. Since we can no longer leverage adjectives for sentiment, we use SentiWordnet for assigning sentiment scores (ranging from -1 to +1 where -1 denotes a negative sentiment, and +1 denotes a positive sentiment) to reviews (as detailed above) and select reviews which have a score above a certain threshold. We have empirically arrived at $\\pm 0.01$ as the threshold value. Any review with a score outside this window is selected. We also restrict the length of reviews to a maximum of 100 words to reduce sparsity.",
"After filtering out reviews with sentiment score less than the threshold value, we are left with a minimum of 8000 reviews per domain. We train on 7500 reviews form each domain and test on 500 reviews. To compare a domain-pair $(D_1,D_2)$ where $D_1$ is the source domain and $D_2$ is the target domain, we compute Angular Similarity between two vectors $V_1$ and $V_2$. $V_1$ is obtained by taking an average over 500 test vectors (from $D_1$) inferred from the model trained on $D_1$. $V_2$ is obtained in a similar manner, except that the test data is from $D_2$. Figure FIGREF30 shows the experimental setup for this metric."
],
[
"Both Word2Vec and GloVe learn vector representations of words from their co-occurrence information. However, GloVe is different in the sense that it is a count-based model. In this metric, we use GloVe embeddings for adjectives shared by domain-pairs. We train GloVe models for each domain over 50 epochs, for 50 dimensions with a learning rate of 0.05. For computing similarity of a domain-pair, we follow the same procedure as described under the Word2Vec metric. The final similarity value is obtained using equation (DISPLAY_FORM29)."
],
[
"We train monolingual word embeddings-based models for each domain using the FastText library. We train these models with 100 dimensions and 0.1 as the learning rate. The size of the context window is limited to 5 since FastText also uses sub-word information. Our model takes into account character n-grams from 3 to 6 characters, and we train our model over 5 epochs. We use the default loss function (softmax) for training.",
"We devise two different metrics out of FastText models to calculate the similarity between domain-pairs. In the first metric (ULM4), we compute the Angular Similarity between the word vectors for all the common adjectives, and for each domain pair just like Word2Vec and GloVe. Overall, similarity for a domain pair is calculated using equation (DISPLAY_FORM29). As an additional metric (ULM5), we extract sentence vectors for reviews and follow a procedure similar to Doc2Vec. SentiWordnet is used to filter out train and test data using the same threshold window of $\\pm 0.01$."
],
[
"We use the pre-trained deep contextualized word representation model provided by the ELMo library. Unlike Word2Vec, GloVe, and FastText, ELMo gives multiple embeddings for a word based on different contexts it appears in the corpus.",
"In ELMo, higher-level LSTM states capture the context-dependent aspects of word meaning. Therefore, we use only the topmost layer for word embeddings with 1024 dimensions. Multiple contextual embeddings of a word are averaged to obtain a single vector. We again use average Angular Similarity of word embeddings for common adjectives to compare domain-pairs along with Jaccard Similarity Coefficient. The final similarity value is obtained using equation (DISPLAY_FORM29)."
],
[
"One of the most recent contributions to the area of sentence embeddings is the Universal Sentence Encoder. Its transformer-based sentence encoding model constructs sentence embeddings using the encoding sub-graph of the transformer architecture BIBREF19. We leverage these embeddings and devise a metric for our work.",
"We extract sentence vectors of reviews in each domain using tensorflow-hub model toolkit. The dimensions of each vector are 512. To find out the similarity between a domain-pair, we extract top 500 reviews from both domains based on the sentiment score acquired using SentiWordnet (as detailed above) and average over them to get two vectors with 512 dimensions each. After that, we find out the Angular Similarity between these vectors to rank all source domains for a particular target domain in decreasing order of similarity."
],
[
"We show the results of the classifier's CDSA performance followed by metrics evaluation on the top 10 domains. Finally, we present an overall comparison of metrics for all the domains.",
"Table TABREF31 shows the average CDSA accuracy degradation in each domain when it is selected as the source domain, and the rest of the domains are selected as the target domain. We also show in-domain sentiment analysis accuracy, the best source domain (on which CDSA classifier is trained), and the best target domain (on which CDSA classifier is tested) in the table. D15 suffers from the maximum average accuracy degradation, and D18 performs the best with least average accuracy degradation, which is also supported by its number of appearances i.e., 4, as the best source domain in the table. As for the best target domain, D9 appears the maximum number of times.",
"To compare metrics, we use two parameters: Precision and Ranking Accuracy.",
"Precision: It is the intersection between the top-K source domains predicted by the metric and top-K source domains as per CDSA accuracy, for a particular target domain. In other words, it is the number of true positives.",
"Ranking Accuracy (RA): It is the number of predicted source domains that are ranked correctly by the metric.",
"Figure FIGREF36 shows the number of true positives (precision) when K = 5 for each metric over the top 10 domains. The X-axis denotes the domains, whereas the Y-axis in the bar graph indicates the precision achieved by all metrics in each domain. We observe that the highest precision attained is 5, by 4 different metrics. We also observe that all the metrics reach a precision of at least 1. A similar observation is made for the remaining domains as well. Figure FIGREF37 displays the RA values of K = 5 in each metric for the top 10 domains. Here, the highest number of correct source domain rankings attained is 4 by ULM6 (ELMo) for domain D5.",
"Table TABREF33 shows results for different values of K in terms of precision percentage and normalized RA (NRA) over all domains. Normalized RA is RA scaled between 0 to 1. For example, entries 45.00 and 0.200 indicate that there is 45% precision with NRA of 0.200 for the top 3 source domains.",
"These are the values when the metric LM1 (Significant Words Overlap) is used to predict the top 3 source domains for all target domains. Best figures for precision and NRA have been shown in bold for all values of K in both labelled as well as unlabelled data metrics. ULM7 (Universal Sentence Encoder) outperforms all other metrics in terms of both precision and NRA for K = 3, 5, and 7. When K = 10, however, ULM6 (ELMo) outperforms ULM7 marginally at the cost of a 0.02 degradation in terms of NRA. For K = 3 and 5, ULM2 (Doc2Vec) has the least precision percentage and NRA, but UML3 (GloVe) and ULM5 (FastText Sentence) take the lowest pedestal for K = 7 and K = 10 respectively, in terms of precision percentage."
],
[
"Table TABREF31 shows that, if a suitable source domain is not selected, CDSA accuracy takes a hit. The degradation suffered is as high as 23.18%. This highlights the motivation of these experiments: the choice of a source domain is critical. We also observe that the automative domain (D2) is the best source domain for clothing (D6), both being unrelated domains in terms of the products they discuss. This holds for many other domain pairs, implying that mere intuition is not enough for source domain selection.",
"From the results, we observe that LM4, which is one of our novel metrics, predicts the best source domain correctly for $D_2$ and $D_4$, which all other metrics fail to do. This is a good point to highlight the fact that this metric captures features missed by other metrics. Also, it gives the best RA for K=3 and 10. Additionally, it offers the advantage of asymmetricity unlike other metrics for labelled data.",
"For labelled data, we observe that LM2 (Symmetric KL-Divergence) and LM3 (Chameleon Words Similarity) perform better than other metrics. Interestingly, they also perform identically for K = 3 and K = 5 in terms of both precision percentage and NRA. We accredit this observation to the fact that both determine the distance between probabilistic distributions of polar words in domain-pairs.",
"Amongst the metrics which utilize word embeddings, ULM1 (Word2Vec) outperforms all other metrics for all values of K. We also observe that word embeddings-based metrics perform better than sentence embeddings-based metrics. Although ULM6 and ULM7 outperform every other metric, we would like to make a note that these are computationally intensive models. Therefore, there is a trade-off between the performance and time when a metric is to be chosen for source domain selection. The reported NRA is low for all the values of K across all metrics. We believe that the reason for this is the unavailability of enough data for the metrics to provide a clear distinction among the source domains. If a considerably larger amount of data would be used, the NRA should improve.",
"We suspect that the use of ELMo and Universal Sentence Encoder to train models for contextualized embeddings on review data in individual domains should improve the precision for ULM6 (ELMo) and ULM7 (Universal Sentence Encoder). However, we cannot say the same for RA as the amount of corpora used for pre-trained models is considerably large. Unfortunately, training models using both these recur a high cost, both computationally and with respect to time, which defeats the very purpose of our work i.e., to pre-determine best source domain for CDSA using non-intensive text similarity-based metrics."
],
[
"In this paper, we investigate how text similarity-based metrics facilitate the selection of a suitable source domain for CDSA. Based on a dataset of reviews in 20 domains, our recommendation chart that shows the best source and target domain pairs for CDSA would be useful for deployments of sentiment classifiers for these domains.",
"In order to compare the benefit of a domain with similarity metrics between the source and target domains, we describe a set of symmetric and asymmetric similarity metrics. These also include two novel metrics to evaluate domain adaptability: namely as LM3 (Chameleon Words Similarity) and LM4 (Entropy Change). These metrics perform at par with the metrics that use previously proposed methods. We observe that, amongst word embedding-based metrics, ULM6 (ELMo) performs the best, and amongst sentence embedding-based metrics, ULM7 (Universal Sentence Encoder) is the clear winner. We discuss various metrics, their results and provide a set of recommendations to the problem of source domain selection for CDSA.",
"A possible future work is to use a weighted combination of multiple metrics for source domain selection. These similarity metrics may be used to extract suitable data or features for efficient CDSA. Similarity metrics may also be used as features to predict the CDSA performance in terms of accuracy degradation."
]
],
"section_name": [
"Introduction",
"Related Work",
"Sentiment Classifier",
"Similarity Metrics",
"Similarity Metrics ::: Metrics: Labelled Data",
"Similarity Metrics ::: Metrics: Labelled Data ::: LM1: Significant Words Overlap",
"Similarity Metrics ::: Metrics: Labelled Data ::: LM2: Symmetric KL-Divergence (SKLD)",
"Similarity Metrics ::: Metrics: Labelled Data ::: LM3: Chameleon Words Similarity",
"Similarity Metrics ::: Metrics: Labelled Data ::: LM4: Entropy Change",
"Similarity Metrics ::: Metrics: Unlabelled Data",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM1: Word2Vec",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM2: Doc2Vec",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM3: GloVe",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM4 and ULM5: FastText",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM6: ELMo",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM7: Universal Sentence Encoder",
"Results",
"Discussion",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"076567bb5ea418470cf53d839bfa67475c2ad28a",
"eabb9df8d938a1f6e5a227ecac29048b6554ac6d"
],
"answer": [
{
"evidence": [
"The core of this work is a sentiment classifier for different domains. We use the DRANZIERA benchmark dataset BIBREF9, which consists of Amazon reviews from 20 domains such as automatives, baby products, beauty products, etc. The detailed list can be seen in Table 1. To ensure that the datasets are balanced across all domains, we randomly select 5000 positive and 5000 negative reviews from each domain. The length of the reviews ranges from 5 words to 1654 words across all domains, with an average length ranging from 71 words to 125 words per domain. We point the reader to the original paper for detailed dataset statistics."
],
"extractive_spans": [
"DRANZIERA benchmark dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"The core of this work is a sentiment classifier for different domains. We use the DRANZIERA benchmark dataset BIBREF9, which consists of Amazon reviews from 20 domains such as automatives, baby products, beauty products, etc. The detailed list can be seen in Table 1. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The core of this work is a sentiment classifier for different domains. We use the DRANZIERA benchmark dataset BIBREF9, which consists of Amazon reviews from 20 domains such as automatives, baby products, beauty products, etc. The detailed list can be seen in Table 1. To ensure that the datasets are balanced across all domains, we randomly select 5000 positive and 5000 negative reviews from each domain. The length of the reviews ranges from 5 words to 1654 words across all domains, with an average length ranging from 71 words to 125 words per domain. We point the reader to the original paper for detailed dataset statistics."
],
"extractive_spans": [
"DRANZIERA "
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the DRANZIERA benchmark dataset BIBREF9, which consists of Amazon reviews from 20 domains such as automatives, baby products, beauty products, etc. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"a0e0dc3441f122459f4ba6a310d56881edcb3843",
"ea3b68bedc4497eb6cc1708ce8a90abd458449ae"
],
"answer": [
{
"evidence": [
"We devise two different metrics out of FastText models to calculate the similarity between domain-pairs. In the first metric (ULM4), we compute the Angular Similarity between the word vectors for all the common adjectives, and for each domain pair just like Word2Vec and GloVe. Overall, similarity for a domain pair is calculated using equation (DISPLAY_FORM29). As an additional metric (ULM5), we extract sentence vectors for reviews and follow a procedure similar to Doc2Vec. SentiWordnet is used to filter out train and test data using the same threshold window of $\\pm 0.01$."
],
"extractive_spans": [
"ULM4",
"ULM5"
],
"free_form_answer": "",
"highlighted_evidence": [
"We devise two different metrics out of FastText models to calculate the similarity between domain-pairs. In the first metric (ULM4), we compute the Angular Similarity between the word vectors for all the common adjectives, and for each domain pair just like Word2Vec and GloVe. Overall, similarity for a domain pair is calculated using equation (DISPLAY_FORM29). As an additional metric (ULM5), we extract sentence vectors for reviews and follow a procedure similar to Doc2Vec. SentiWordnet is used to filter out train and test data using the same threshold window of $\\pm 0.01$."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to compare the benefit of a domain with similarity metrics between the source and target domains, we describe a set of symmetric and asymmetric similarity metrics. These also include two novel metrics to evaluate domain adaptability: namely as LM3 (Chameleon Words Similarity) and LM4 (Entropy Change). These metrics perform at par with the metrics that use previously proposed methods. We observe that, amongst word embedding-based metrics, ULM6 (ELMo) performs the best, and amongst sentence embedding-based metrics, ULM7 (Universal Sentence Encoder) is the clear winner. We discuss various metrics, their results and provide a set of recommendations to the problem of source domain selection for CDSA."
],
"extractive_spans": [
"LM3 (Chameleon Words Similarity) and LM4 (Entropy Change)"
],
"free_form_answer": "",
"highlighted_evidence": [
"w",
"These also include two novel metrics to evaluate domain adaptability: namely as LM3 (Chameleon Words Similarity) and LM4 (Entropy Change). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"01999084082d5e7a3462f3556e97b922e1c747f1",
"4e6afbe5ece470df758b0251799b7b3e0cf5256b"
],
"answer": [
{
"evidence": [
"When target domain data is labelled, we use the following four metrics for comparing and ranking source domains for a particular target domain:",
"Similarity Metrics ::: Metrics: Labelled Data ::: LM1: Significant Words Overlap",
"All words in a domain are not significant for sentiment expression. For example, comfortable is significant in the `Clothing' domain but not as significant in the `Movie' domain. In this metric, we build upon existing work by sharma2018identifying and extract significant words from each domain using the $\\chi ^2$ test. This method relies on computing the statistical significance of a word based on the polarity of that word in the domain. For our experiments, we consider only the words which appear at least 10 times in the corpus and have a $\\chi ^2$ value greater than or equal to 1. The $\\chi ^2$ value is calculated as follows:",
"Where ${c_p}^w$ and ${c_n}^w$ are the observed counts of word $w$ in positive and negative reviews, respectively. $\\mu ^w$ is the expected count, which is kept as half of the total number of occurrences of $w$ in the corpus. We hypothesize that, if a domain-pair $(D_1,D_2)$ shares a larger number of significant words than the pair $(D_1,D_3)$, then $D_1$ is closer to $D_2$ as compared to $D_3$, since they use relatively higher number of similar words for sentiment expression. For every target domain, we compute the intersection of significant words with all other domains and rank them on the basis of intersection count. The utility of this metric is that it can also be used in a scenario where target domain data is unlabelled, but source domain data is labelled. It is due to the fact that once we obtain significant words in the source domain, we just need to search for them in the target domain to find out common significant words.",
"Similarity Metrics ::: Metrics: Labelled Data ::: LM2: Symmetric KL-Divergence (SKLD)",
"KL Divergence can be used to compare the probabilistic distribution of polar words in two domains BIBREF10. A lower KL Divergence score indicates that the probabilistic distribution of polar words in two domains is identical. This implies that the domains are close to each other, in terms of sentiment similarity. Therefore, to rank source domains for a target domain using this metric, we inherit the concept of symmetric KL Divergence proposed by murthy2018judicious and use it to compute average Symmetric KL-Divergence of common polar words shared by a domain-pair. We label a word as `polar' for a domain if,",
"where $P$ is the probability of a word appearing in a review which is labelled positive and $N$ is the probability of a word appearing in a review which is labelled negative.",
"SKLD of a polar word for domain-pair $(D_1,D_2)$ is calculated as:",
"where $P_i$ and $N_i$ are probabilities of a word appearing under positively labelled and negatively labelled reviews, respectively, in domain $i$. We then take an average of all common polar words.",
"We observe that, on its own, this metric performs rather poorly. Upon careful analysis of results, we concluded that the imbalance in the number of polar words being shared across domain-pairs is a reason for poor performance. To mitigate this, we compute a confidence term for a domain-pair $(D_1,D_2)$ using the Jaccard Similarity Coefficient which is calculated as follows:",
"where $C$ is the number of common polar words and $W_1$ and $W_2$ are number of polar words in $D_1$ and $D_2$ respectively. The intuition behind this being that the domain-pairs having higher percentage of polar words overlap should be ranked higher compared to those having relatively higher number of polar words. For example, we prefer $(C:40,W_1 :50,W_2 :50)$ over $(C:200,W_1 :500,W_2 :500)$ even though 200 is greater than 40. To compute the final similarity value, we add the reciprocal of $J$ to the SKLD value since a larger value of $J$ will add a smaller fraction to SLKD value. For a smaller SKLD value, the domains would be relatively more similar. This is computed as follows:",
"Domain pairs are ranked in increasing order of this similarity value. After the introduction of the confidence term, a significant improvement in the results is observed.",
"Similarity Metrics ::: Metrics: Labelled Data ::: LM3: Chameleon Words Similarity",
"This metric is our novel contribution for domain adaptability evaluation. It helps in detection of `Chameleon Word(s)' which change their polarity across domains BIBREF11. The motivation comes from the fact that chameleon words directly affect the CDSA accuracy. For example, poignant is positive in movie domain whereas negative in many other domains viz. Beauty, Clothing etc.",
"For every common polar word between two domains, $L_1 \\ Distance$ between two vectors $[P_1,N_1]$ and $[P_2,N_2]$ is calculated as;",
"The overall distance is an average overall common polar words. Similar to SKLD, the confidence term based on Jaccard Similarity Coefficient is used to counter the imbalance of common polar word count between domain-pairs.",
"Domain pairs are ranked in increasing order of final value.",
"Similarity Metrics ::: Metrics: Labelled Data ::: LM4: Entropy Change",
"Entropy is the degree of randomness. A relatively lower change in entropy, when two domains are concatenated, indicates that the two domains contain similar topics and are therefore closer to each other. This metric is also our novel contribution. Using this metric, we calculate the percentage change in the entropy when the target domain is concatenated with the source domain. We calculate the entropy as the combination of entropy for unigrams, bigrams, trigrams, and quadrigrams. We consider only polar words for unigrams. For bi, tri and quadrigrams, we give priority to polar words by using a weighted entropy function and this weighted entropy $E$ is calculated as:",
"Here, $X$ is the set of n-grams that contain at least one polar word, $Y$ is the set of n-grams which do not contain any polar word, and $w$ is the weight. For our experiments, we keep the value of $w$ as 1 for unigrams and 5 for bi, tri, and quadrigrams.",
"We then say that a source domain $D_2$ is more suitable for target domain $D_1$ as compared to source domain $D_3$ if;",
"where $D_2+D_1$ indicates combined data obtained by mixing $D_1$ in $D_2$ and $\\Delta E$ indicates percentage change in entropy before and after mixing of source and target domains.",
"Note that this metric offers the advantage of asymmetricity, unlike the other three metrics for labelled data.",
"For unlabelled target domain data, we utilize word and sentence embeddings-based similarity as a metric and use various embedding models. To train word embedding based models, we use Word2Vec BIBREF12, GloVe BIBREF13, FastText BIBREF14, and ELMo BIBREF15. We also exploit sentence vectors from models trained using Doc2Vec BIBREF16, FastText, and Universal Sentence Encoder BIBREF17. In addition to using plain sentence vectors, we account for sentiment in sentences using SentiWordnet BIBREF18, where each review is given a sentiment score by taking harmonic mean over scores (obtained from SentiWordnet) of words in a review.",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM1: Word2Vec",
"We train SKIPGRAM models on all the domains to obtain word embeddings. We build models with 50 dimensions where the context window is chosen to be 5. For each domain pair, we then compare embeddings of common adjectives in both the domains by calculating Angular Similarity BIBREF17. It was observed that cosine similarity values were very close to each other, making it difficult to clearly separate domains. Since Angular Similarity distinguishes nearly parallel vectors much better, we use it instead of Cosine Similarity. We obtain a similarity value by averaging over all common adjectives. For the final similarity value of this metric, we use Jaccard Similarity Coefficient here as well:",
"For a target domain, source domains are ranked in decreasing order of final similarity value.",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM2: Doc2Vec",
"Doc2Vec represents each sentence by a dense vector which is trained to predict words in the sentence, given the model. It tries to overcome the weaknesses of the bag-of-words model. Similar to Word2Vec, we train Doc2Vec models on each domain to extract sentence vectors. We train the models over 100 epochs for 100 dimensions, where the learning rate is 0.025. Since we can no longer leverage adjectives for sentiment, we use SentiWordnet for assigning sentiment scores (ranging from -1 to +1 where -1 denotes a negative sentiment, and +1 denotes a positive sentiment) to reviews (as detailed above) and select reviews which have a score above a certain threshold. We have empirically arrived at $\\pm 0.01$ as the threshold value. Any review with a score outside this window is selected. We also restrict the length of reviews to a maximum of 100 words to reduce sparsity.",
"After filtering out reviews with sentiment score less than the threshold value, we are left with a minimum of 8000 reviews per domain. We train on 7500 reviews form each domain and test on 500 reviews. To compare a domain-pair $(D_1,D_2)$ where $D_1$ is the source domain and $D_2$ is the target domain, we compute Angular Similarity between two vectors $V_1$ and $V_2$. $V_1$ is obtained by taking an average over 500 test vectors (from $D_1$) inferred from the model trained on $D_1$. $V_2$ is obtained in a similar manner, except that the test data is from $D_2$. Figure FIGREF30 shows the experimental setup for this metric.",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM3: GloVe",
"Both Word2Vec and GloVe learn vector representations of words from their co-occurrence information. However, GloVe is different in the sense that it is a count-based model. In this metric, we use GloVe embeddings for adjectives shared by domain-pairs. We train GloVe models for each domain over 50 epochs, for 50 dimensions with a learning rate of 0.05. For computing similarity of a domain-pair, we follow the same procedure as described under the Word2Vec metric. The final similarity value is obtained using equation (DISPLAY_FORM29).",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM4 and ULM5: FastText",
"We train monolingual word embeddings-based models for each domain using the FastText library. We train these models with 100 dimensions and 0.1 as the learning rate. The size of the context window is limited to 5 since FastText also uses sub-word information. Our model takes into account character n-grams from 3 to 6 characters, and we train our model over 5 epochs. We use the default loss function (softmax) for training.",
"We devise two different metrics out of FastText models to calculate the similarity between domain-pairs. In the first metric (ULM4), we compute the Angular Similarity between the word vectors for all the common adjectives, and for each domain pair just like Word2Vec and GloVe. Overall, similarity for a domain pair is calculated using equation (DISPLAY_FORM29). As an additional metric (ULM5), we extract sentence vectors for reviews and follow a procedure similar to Doc2Vec. SentiWordnet is used to filter out train and test data using the same threshold window of $\\pm 0.01$.",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM6: ELMo",
"We use the pre-trained deep contextualized word representation model provided by the ELMo library. Unlike Word2Vec, GloVe, and FastText, ELMo gives multiple embeddings for a word based on different contexts it appears in the corpus.",
"In ELMo, higher-level LSTM states capture the context-dependent aspects of word meaning. Therefore, we use only the topmost layer for word embeddings with 1024 dimensions. Multiple contextual embeddings of a word are averaged to obtain a single vector. We again use average Angular Similarity of word embeddings for common adjectives to compare domain-pairs along with Jaccard Similarity Coefficient. The final similarity value is obtained using equation (DISPLAY_FORM29).",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM7: Universal Sentence Encoder",
"One of the most recent contributions to the area of sentence embeddings is the Universal Sentence Encoder. Its transformer-based sentence encoding model constructs sentence embeddings using the encoding sub-graph of the transformer architecture BIBREF19. We leverage these embeddings and devise a metric for our work.",
"We extract sentence vectors of reviews in each domain using tensorflow-hub model toolkit. The dimensions of each vector are 512. To find out the similarity between a domain-pair, we extract top 500 reviews from both domains based on the sentiment score acquired using SentiWordnet (as detailed above) and average over them to get two vectors with 512 dimensions each. After that, we find out the Angular Similarity between these vectors to rank all source domains for a particular target domain in decreasing order of similarity."
],
"extractive_spans": [
"LM1: Significant Words Overlap",
" LM2: Symmetric KL-Divergence (SKLD)",
"LM3: Chameleon Words Similarity",
"LM4: Entropy Change",
" ULM1: Word2Vec",
"ULM2: Doc2Vec",
"ULM3: GloVe",
"ULM4 and ULM5: FastText",
"ULM6: ELMo",
"ULM7: Universal Sentence Encoder"
],
"free_form_answer": "",
"highlighted_evidence": [
"When target domain data is labelled, we use the following four metrics for comparing and ranking source domains for a particular target domain:\n\nSimilarity Metrics ::: Metrics: Labelled Data ::: LM1: Significant Words Overlap\nAll words in a domain are not significant for sentiment expression. For example, comfortable is significant in the `Clothing' domain but not as significant in the `Movie' domain. In this metric, we build upon existing work by sharma2018identifying and extract significant words from each domain using the $\\chi ^2$ test. This method relies on computing the statistical significance of a word based on the polarity of that word in the domain. For our experiments, we consider only the words which appear at least 10 times in the corpus and have a $\\chi ^2$ value greater than or equal to 1. The $\\chi ^2$ value is calculated as follows:\n\nWhere ${c_p}^w$ and ${c_n}^w$ are the observed counts of word $w$ in positive and negative reviews, respectively. $\\mu ^w$ is the expected count, which is kept as half of the total number of occurrences of $w$ in the corpus. We hypothesize that, if a domain-pair $(D_1,D_2)$ shares a larger number of significant words than the pair $(D_1,D_3)$, then $D_1$ is closer to $D_2$ as compared to $D_3$, since they use relatively higher number of similar words for sentiment expression. For every target domain, we compute the intersection of significant words with all other domains and rank them on the basis of intersection count. The utility of this metric is that it can also be used in a scenario where target domain data is unlabelled, but source domain data is labelled. It is due to the fact that once we obtain significant words in the source domain, we just need to search for them in the target domain to find out common significant words.\n\nSimilarity Metrics ::: Metrics: Labelled Data ::: LM2: Symmetric KL-Divergence (SKLD)\nKL Divergence can be used to compare the probabilistic distribution of polar words in two domains BIBREF10. A lower KL Divergence score indicates that the probabilistic distribution of polar words in two domains is identical. This implies that the domains are close to each other, in terms of sentiment similarity. Therefore, to rank source domains for a target domain using this metric, we inherit the concept of symmetric KL Divergence proposed by murthy2018judicious and use it to compute average Symmetric KL-Divergence of common polar words shared by a domain-pair. We label a word as `polar' for a domain if,\n\nwhere $P$ is the probability of a word appearing in a review which is labelled positive and $N$ is the probability of a word appearing in a review which is labelled negative.\n\nSKLD of a polar word for domain-pair $(D_1,D_2)$ is calculated as:\n\nwhere $P_i$ and $N_i$ are probabilities of a word appearing under positively labelled and negatively labelled reviews, respectively, in domain $i$. We then take an average of all common polar words.\n\nWe observe that, on its own, this metric performs rather poorly. Upon careful analysis of results, we concluded that the imbalance in the number of polar words being shared across domain-pairs is a reason for poor performance. To mitigate this, we compute a confidence term for a domain-pair $(D_1,D_2)$ using the Jaccard Similarity Coefficient which is calculated as follows:\n\nwhere $C$ is the number of common polar words and $W_1$ and $W_2$ are number of polar words in $D_1$ and $D_2$ respectively. 
The intuition behind this being that the domain-pairs having higher percentage of polar words overlap should be ranked higher compared to those having relatively higher number of polar words. For example, we prefer $(C:40,W_1 :50,W_2 :50)$ over $(C:200,W_1 :500,W_2 :500)$ even though 200 is greater than 40. To compute the final similarity value, we add the reciprocal of $J$ to the SKLD value since a larger value of $J$ will add a smaller fraction to SLKD value. For a smaller SKLD value, the domains would be relatively more similar. This is computed as follows:\n\nDomain pairs are ranked in increasing order of this similarity value. After the introduction of the confidence term, a significant improvement in the results is observed.\n\nSimilarity Metrics ::: Metrics: Labelled Data ::: LM3: Chameleon Words Similarity\nThis metric is our novel contribution for domain adaptability evaluation. It helps in detection of `Chameleon Word(s)' which change their polarity across domains BIBREF11. The motivation comes from the fact that chameleon words directly affect the CDSA accuracy. For example, poignant is positive in movie domain whereas negative in many other domains viz. Beauty, Clothing etc.\n\nFor every common polar word between two domains, $L_1 \\ Distance$ between two vectors $[P_1,N_1]$ and $[P_2,N_2]$ is calculated as;\n\nThe overall distance is an average overall common polar words. Similar to SKLD, the confidence term based on Jaccard Similarity Coefficient is used to counter the imbalance of common polar word count between domain-pairs.\n\nDomain pairs are ranked in increasing order of final value.\n\nSimilarity Metrics ::: Metrics: Labelled Data ::: LM4: Entropy Change\nEntropy is the degree of randomness. A relatively lower change in entropy, when two domains are concatenated, indicates that the two domains contain similar topics and are therefore closer to each other. This metric is also our novel contribution. Using this metric, we calculate the percentage change in the entropy when the target domain is concatenated with the source domain. We calculate the entropy as the combination of entropy for unigrams, bigrams, trigrams, and quadrigrams. We consider only polar words for unigrams. For bi, tri and quadrigrams, we give priority to polar words by using a weighted entropy function and this weighted entropy $E$ is calculated as:\n\nHere, $X$ is the set of n-grams that contain at least one polar word, $Y$ is the set of n-grams which do not contain any polar word, and $w$ is the weight. For our experiments, we keep the value of $w$ as 1 for unigrams and 5 for bi, tri, and quadrigrams.\n\nWe then say that a source domain $D_2$ is more suitable for target domain $D_1$ as compared to source domain $D_3$ if;\n\nwhere $D_2+D_1$ indicates combined data obtained by mixing $D_1$ in $D_2$ and $\\Delta E$ indicates percentage change in entropy before and after mixing of source and target domains.\n\nNote that this metric offers the advantage of asymmetricity, unlike the other three metrics for labelled data.",
"For unlabelled target domain data, we utilize word and sentence embeddings-based similarity as a metric and use various embedding models",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM1: Word2Vec\nWe train SKIPGRAM models on all the domains to obtain word embeddings. We build models with 50 dimensions where the context window is chosen to be 5. For each domain pair, we then compare embeddings of common adjectives in both the domains by calculating Angular Similarity BIBREF17. It was observed that cosine similarity values were very close to each other, making it difficult to clearly separate domains. Since Angular Similarity distinguishes nearly parallel vectors much better, we use it instead of Cosine Similarity. We obtain a similarity value by averaging over all common adjectives. For the final similarity value of this metric, we use Jaccard Similarity Coefficient here as well:\n\nFor a target domain, source domains are ranked in decreasing order of final similarity value.\n\nSimilarity Metrics ::: Metrics: Unlabelled Data ::: ULM2: Doc2Vec\nDoc2Vec represents each sentence by a dense vector which is trained to predict words in the sentence, given the model. It tries to overcome the weaknesses of the bag-of-words model. Similar to Word2Vec, we train Doc2Vec models on each domain to extract sentence vectors. We train the models over 100 epochs for 100 dimensions, where the learning rate is 0.025. Since we can no longer leverage adjectives for sentiment, we use SentiWordnet for assigning sentiment scores (ranging from -1 to +1 where -1 denotes a negative sentiment, and +1 denotes a positive sentiment) to reviews (as detailed above) and select reviews which have a score above a certain threshold. We have empirically arrived at $\\pm 0.01$ as the threshold value. Any review with a score outside this window is selected. We also restrict the length of reviews to a maximum of 100 words to reduce sparsity.\n\nAfter filtering out reviews with sentiment score less than the threshold value, we are left with a minimum of 8000 reviews per domain. We train on 7500 reviews form each domain and test on 500 reviews. To compare a domain-pair $(D_1,D_2)$ where $D_1$ is the source domain and $D_2$ is the target domain, we compute Angular Similarity between two vectors $V_1$ and $V_2$. $V_1$ is obtained by taking an average over 500 test vectors (from $D_1$) inferred from the model trained on $D_1$. $V_2$ is obtained in a similar manner, except that the test data is from $D_2$. Figure FIGREF30 shows the experimental setup for this metric.\n\nSimilarity Metrics ::: Metrics: Unlabelled Data ::: ULM3: GloVe\nBoth Word2Vec and GloVe learn vector representations of words from their co-occurrence information. However, GloVe is different in the sense that it is a count-based model. In this metric, we use GloVe embeddings for adjectives shared by domain-pairs. We train GloVe models for each domain over 50 epochs, for 50 dimensions with a learning rate of 0.05. For computing similarity of a domain-pair, we follow the same procedure as described under the Word2Vec metric. The final similarity value is obtained using equation (DISPLAY_FORM29).\n\nSimilarity Metrics ::: Metrics: Unlabelled Data ::: ULM4 and ULM5: FastText\nWe train monolingual word embeddings-based models for each domain using the FastText library. We train these models with 100 dimensions and 0.1 as the learning rate. The size of the context window is limited to 5 since FastText also uses sub-word information. Our model takes into account character n-grams from 3 to 6 characters, and we train our model over 5 epochs. 
We use the default loss function (softmax) for training.\n\nWe devise two different metrics out of FastText models to calculate the similarity between domain-pairs. In the first metric (ULM4), we compute the Angular Similarity between the word vectors for all the common adjectives, and for each domain pair just like Word2Vec and GloVe. Overall, similarity for a domain pair is calculated using equation (DISPLAY_FORM29). As an additional metric (ULM5), we extract sentence vectors for reviews and follow a procedure similar to Doc2Vec. SentiWordnet is used to filter out train and test data using the same threshold window of $\\pm 0.01$.\n\nSimilarity Metrics ::: Metrics: Unlabelled Data ::: ULM6: ELMo\nWe use the pre-trained deep contextualized word representation model provided by the ELMo library. Unlike Word2Vec, GloVe, and FastText, ELMo gives multiple embeddings for a word based on different contexts it appears in the corpus.\n\nIn ELMo, higher-level LSTM states capture the context-dependent aspects of word meaning. Therefore, we use only the topmost layer for word embeddings with 1024 dimensions. Multiple contextual embeddings of a word are averaged to obtain a single vector. We again use average Angular Similarity of word embeddings for common adjectives to compare domain-pairs along with Jaccard Similarity Coefficient. The final similarity value is obtained using equation (DISPLAY_FORM29).\n\nSimilarity Metrics ::: Metrics: Unlabelled Data ::: ULM7: Universal Sentence Encoder\nOne of the most recent contributions to the area of sentence embeddings is the Universal Sentence Encoder. Its transformer-based sentence encoding model constructs sentence embeddings using the encoding sub-graph of the transformer architecture BIBREF19. We leverage these embeddings and devise a metric for our work.\n\nWe extract sentence vectors of reviews in each domain using tensorflow-hub model toolkit. The dimensions of each vector are 512. To find out the similarity between a domain-pair, we extract top 500 reviews from both domains based on the sentiment score acquired using SentiWordnet (as detailed above) and average over them to get two vectors with 512 dimensions each. After that, we find out the Angular Similarity between these vectors to rank all source domains for a particular target domain in decreasing order of similarity."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In table TABREF6, we present the n-gram percent match among the domain data used in our experiments. We observe that the n-gram match from among this corpora is relatively low and simple corpus similarity measures which use orthographic techniques cannot be used to obtain domain similarity. Hence, we propose the use of the metrics detailed below to perform our experiments.",
"We use a total of 11 metrics over two scenarios: the first that uses labelled data, while the second that uses unlabelled data.",
"We explain all our metrics in detail later in this section. These 11 metrics can also be classified into two categories:",
"Symmetric Metrics - The metrics which consider domain-pairs $(D_1,D_2)$ and $(D_2,D_1)$ as the same and provide similar results for them viz. Significant Words Overlap, Chameleon Words Similarity, Symmetric KL Divergence, Word2Vec embeddings, GloVe embeddings, FastText word embeddings, ELMo based embeddings and Universal Sentence Encoder based embeddings.",
"Asymmetric Metrics - The metrics which are 2-way in nature i.e., $(D_1,D_2)$ and $(D_2,D_1)$ have different similarity values viz. Entropy Change, Doc2Vec embeddings, and FastText sentence embeddings. These metrics offer additional advantage as they can help decide which domain to train from and which domain to test on amongst $D_1$ and $D_2$.",
"Similarity Metrics ::: Metrics: Labelled Data ::: LM1: Significant Words Overlap",
"All words in a domain are not significant for sentiment expression. For example, comfortable is significant in the `Clothing' domain but not as significant in the `Movie' domain. In this metric, we build upon existing work by sharma2018identifying and extract significant words from each domain using the $\\chi ^2$ test. This method relies on computing the statistical significance of a word based on the polarity of that word in the domain. For our experiments, we consider only the words which appear at least 10 times in the corpus and have a $\\chi ^2$ value greater than or equal to 1. The $\\chi ^2$ value is calculated as follows:",
"Similarity Metrics ::: Metrics: Labelled Data ::: LM2: Symmetric KL-Divergence (SKLD)",
"KL Divergence can be used to compare the probabilistic distribution of polar words in two domains BIBREF10. A lower KL Divergence score indicates that the probabilistic distribution of polar words in two domains is identical. This implies that the domains are close to each other, in terms of sentiment similarity. Therefore, to rank source domains for a target domain using this metric, we inherit the concept of symmetric KL Divergence proposed by murthy2018judicious and use it to compute average Symmetric KL-Divergence of common polar words shared by a domain-pair. We label a word as `polar' for a domain if,",
"Similarity Metrics ::: Metrics: Labelled Data ::: LM3: Chameleon Words Similarity",
"This metric is our novel contribution for domain adaptability evaluation. It helps in detection of `Chameleon Word(s)' which change their polarity across domains BIBREF11. The motivation comes from the fact that chameleon words directly affect the CDSA accuracy. For example, poignant is positive in movie domain whereas negative in many other domains viz. Beauty, Clothing etc.",
"Similarity Metrics ::: Metrics: Labelled Data ::: LM4: Entropy Change",
"Entropy is the degree of randomness. A relatively lower change in entropy, when two domains are concatenated, indicates that the two domains contain similar topics and are therefore closer to each other. This metric is also our novel contribution. Using this metric, we calculate the percentage change in the entropy when the target domain is concatenated with the source domain. We calculate the entropy as the combination of entropy for unigrams, bigrams, trigrams, and quadrigrams. We consider only polar words for unigrams. For bi, tri and quadrigrams, we give priority to polar words by using a weighted entropy function and this weighted entropy $E$ is calculated as:",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM1: Word2Vec",
"We train SKIPGRAM models on all the domains to obtain word embeddings. We build models with 50 dimensions where the context window is chosen to be 5. For each domain pair, we then compare embeddings of common adjectives in both the domains by calculating Angular Similarity BIBREF17. It was observed that cosine similarity values were very close to each other, making it difficult to clearly separate domains. Since Angular Similarity distinguishes nearly parallel vectors much better, we use it instead of Cosine Similarity. We obtain a similarity value by averaging over all common adjectives. For the final similarity value of this metric, we use Jaccard Similarity Coefficient here as well:",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM2: Doc2Vec",
"Doc2Vec represents each sentence by a dense vector which is trained to predict words in the sentence, given the model. It tries to overcome the weaknesses of the bag-of-words model. Similar to Word2Vec, we train Doc2Vec models on each domain to extract sentence vectors. We train the models over 100 epochs for 100 dimensions, where the learning rate is 0.025. Since we can no longer leverage adjectives for sentiment, we use SentiWordnet for assigning sentiment scores (ranging from -1 to +1 where -1 denotes a negative sentiment, and +1 denotes a positive sentiment) to reviews (as detailed above) and select reviews which have a score above a certain threshold. We have empirically arrived at $\\pm 0.01$ as the threshold value. Any review with a score outside this window is selected. We also restrict the length of reviews to a maximum of 100 words to reduce sparsity.",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM3: GloVe",
"Both Word2Vec and GloVe learn vector representations of words from their co-occurrence information. However, GloVe is different in the sense that it is a count-based model. In this metric, we use GloVe embeddings for adjectives shared by domain-pairs. We train GloVe models for each domain over 50 epochs, for 50 dimensions with a learning rate of 0.05. For computing similarity of a domain-pair, we follow the same procedure as described under the Word2Vec metric. The final similarity value is obtained using equation (DISPLAY_FORM29).",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM4 and ULM5: FastText",
"We train monolingual word embeddings-based models for each domain using the FastText library. We train these models with 100 dimensions and 0.1 as the learning rate. The size of the context window is limited to 5 since FastText also uses sub-word information. Our model takes into account character n-grams from 3 to 6 characters, and we train our model over 5 epochs. We use the default loss function (softmax) for training.",
"We devise two different metrics out of FastText models to calculate the similarity between domain-pairs. In the first metric (ULM4), we compute the Angular Similarity between the word vectors for all the common adjectives, and for each domain pair just like Word2Vec and GloVe. Overall, similarity for a domain pair is calculated using equation (DISPLAY_FORM29). As an additional metric (ULM5), we extract sentence vectors for reviews and follow a procedure similar to Doc2Vec. SentiWordnet is used to filter out train and test data using the same threshold window of $\\pm 0.01$.",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM6: ELMo",
"We use the pre-trained deep contextualized word representation model provided by the ELMo library. Unlike Word2Vec, GloVe, and FastText, ELMo gives multiple embeddings for a word based on different contexts it appears in the corpus."
],
"extractive_spans": [
"LM1: Significant Words Overlap",
"LM2: Symmetric KL-Divergence (SKLD)",
"LM3: Chameleon Words Similarity",
"LM4: Entropy Change",
"ULM1: Word2Vec",
"ULM2: Doc2Vec",
"ULM3: GloVe",
"ULM4 and ULM5: FastText",
" ULM6: ELMo"
],
"free_form_answer": "",
"highlighted_evidence": [
"In table TABREF6, we present the n-gram percent match among the domain data used in our experiments. We observe that the n-gram match from among this corpora is relatively low and simple corpus similarity measures which use orthographic techniques cannot be used to obtain domain similarity. Hence, we propose the use of the metrics detailed below to perform our experiments.",
"We use a total of 11 metrics over two scenarios: the first that uses labelled data, while the second that uses unlabelled data.",
"We explain all our metrics in detail later in this section. These 11 metrics can also be classified into two categories:\n\nSymmetric Metrics - The metrics which consider domain-pairs $(D_1,D_2)$ and $(D_2,D_1)$ as the same and provide similar results for them viz. Significant Words Overlap, Chameleon Words Similarity, Symmetric KL Divergence, Word2Vec embeddings, GloVe embeddings, FastText word embeddings, ELMo based embeddings and Universal Sentence Encoder based embeddings.\n\nAsymmetric Metrics - The metrics which are 2-way in nature i.e., $(D_1,D_2)$ and $(D_2,D_1)$ have different similarity values viz. Entropy Change, Doc2Vec embeddings, and FastText sentence embeddings. These metrics offer additional advantage as they can help decide which domain to train from and which domain to test on amongst $D_1$ and $D_2$.",
"Similarity Metrics ::: Metrics: Labelled Data ::: LM1: Significant Words Overlap\nAll words in a domain are not significant for sentiment expression. For example, comfortable is significant in the `Clothing' domain but not as significant in the `Movie' domain.",
"Metrics: Labelled Data ::: LM2: Symmetric KL-Divergence (SKLD)\nKL Divergence can be used to compare the probabilistic distribution of polar words in two domains BIBREF10. A lower KL Divergence score indicates that the probabilistic distribution of polar words in two domains is identical. ",
"Similarity Metrics ::: Metrics: Labelled Data ::: LM3: Chameleon Words Similarity\nThis metric is our novel contribution for domain adaptability evaluation. It helps in detection of `Chameleon Word(s)' which change their polarity across domains BIBREF11. ",
"Similarity Metrics ::: Metrics: Labelled Data ::: LM4: Entropy Change\nEntropy is the degree of randomness. A relatively lower change in entropy, when two domains are concatenated, indicates that the two domains contain similar topics and are therefore closer to each other. ",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM1: Word2Vec\nWe train SKIPGRAM models on all the domains to obtain word embeddings. We build models with 50 dimensions where the context window is chosen to be 5.",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM2: Doc2Vec\nDoc2Vec represents each sentence by a dense vector which is trained to predict words in the sentence, given the model. It tries to overcome the weaknesses of the bag-of-words model. Similar to Word2Vec, we train Doc2Vec models on each domain to extract sentence vectors. ",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM3: GloVe\nBoth Word2Vec and GloVe learn vector representations of words from their co-occurrence information. However, GloVe is different in the sense that it is a count-based model. In this metric, we use GloVe embeddings for adjectives shared by domain-pairs.",
"imilarity Metrics ::: Metrics: Unlabelled Data ::: ULM4 and ULM5: FastText\nWe train monolingual word embeddings-based models for each domain using the FastText library. We train these models with 100 dimensions and 0.1 as the learning rate. ",
"We devise two different metrics out of FastText models to calculate the similarity between domain-pairs. In the first metric (ULM4), we compute the Angular Similarity between the word vectors for all the common adjectives, and for each domain pair just like Word2Vec and GloVe. Overall, similarity for a domain pair is calculated using equation (DISPLAY_FORM29). As an additional metric (ULM5), we extract sentence vectors for reviews and follow a procedure similar to Doc2Vec. SentiWordnet is used to filter out train and test data using the same threshold window of $\\pm 0.01$.",
"Similarity Metrics ::: Metrics: Unlabelled Data ::: ULM6: ELMo\nWe use the pre-trained deep contextualized word representation model provided by the ELMo library. Unlike Word2Vec, GloVe, and FastText, ELMo gives multiple embeddings for a word based on different contexts it appears in the corpus."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"9ecd5f7d9819b8c738713c7461880b71a1abdb0a",
"e942d7704d67f47525ca48330e3696602bd51d87"
],
"answer": [
{
"evidence": [
"The core of this work is a sentiment classifier for different domains. We use the DRANZIERA benchmark dataset BIBREF9, which consists of Amazon reviews from 20 domains such as automatives, baby products, beauty products, etc. The detailed list can be seen in Table 1. To ensure that the datasets are balanced across all domains, we randomly select 5000 positive and 5000 negative reviews from each domain. The length of the reviews ranges from 5 words to 1654 words across all domains, with an average length ranging from 71 words to 125 words per domain. We point the reader to the original paper for detailed dataset statistics.",
"FLOAT SELECTED: Table 1: Accuracy percentage for all train-test pairs. Domains on rows are source domains and columns are target domains. Domain labels are D1: Amazon Instant Video, D2: Automotive, D3: Baby, D4: Beauty, D5: Books, D6: Clothing Accessories, D7: Electronics, D8: Health, D9: Home, D10: Kitchen, D11: Movies TV, D12: Music, D13: Office Products, D14: Patio, D15: Pet Supplies, D15: Shoes, D16: Software, D17: Sports Outdoors, D18: Tools Home Improvement, D19: Toys Games, D20: Video Games."
],
"extractive_spans": [],
"free_form_answer": "Amazon Instant Video, Automotive, Baby, Beauty, Books, Clothing Accessories, Electronics, Health, Home, Kitchen, Movies, Music, Office Products, Patio, Pet Supplies, Shoes, Software, Sports Outdoors, Tools Home Improvement, Toys Games, Video Games.",
"highlighted_evidence": [
"We use the DRANZIERA benchmark dataset BIBREF9, which consists of Amazon reviews from 20 domains such as automatives, baby products, beauty products, etc. The detailed list can be seen in Table 1.",
"FLOAT SELECTED: Table 1: Accuracy percentage for all train-test pairs. Domains on rows are source domains and columns are target domains. Domain labels are D1: Amazon Instant Video, D2: Automotive, D3: Baby, D4: Beauty, D5: Books, D6: Clothing Accessories, D7: Electronics, D8: Health, D9: Home, D10: Kitchen, D11: Movies TV, D12: Music, D13: Office Products, D14: Patio, D15: Pet Supplies, D15: Shoes, D16: Software, D17: Sports Outdoors, D18: Tools Home Improvement, D19: Toys Games, D20: Video Games."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF31 shows the average CDSA accuracy degradation in each domain when it is selected as the source domain, and the rest of the domains are selected as the target domain. We also show in-domain sentiment analysis accuracy, the best source domain (on which CDSA classifier is trained), and the best target domain (on which CDSA classifier is tested) in the table. D15 suffers from the maximum average accuracy degradation, and D18 performs the best with least average accuracy degradation, which is also supported by its number of appearances i.e., 4, as the best source domain in the table. As for the best target domain, D9 appears the maximum number of times."
],
"extractive_spans": [],
"free_form_answer": "Amazon Instant Video\nAutomotive\nBaby\nBeauty\nBooks\nClothing Accessories\nElectronics\nHealth\nHome Kitchen\nMovies TV\nMusic\nOffice Products\nPatio\nPet Supplies\nShoes\nSoftware\nSports Outdoors\nTools Home Improvement\nToys Games\nVideo Games",
"highlighted_evidence": [
"Table TABREF31 shows the average CDSA accuracy degradation in each domain when it is selected as the source domain, and the rest of the domains are selected as the target domain. We also show in-domain sentiment analysis accuracy, the best source domain (on which CDSA classifier is trained), and the best target domain (on which CDSA classifier is tested) in the table."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What datasets are available for CDSA task?",
"What two novel metrics proposed?",
"What similarity metrics have been tried?",
"What 20 domains are available for selection of source domain?"
],
"question_id": [
"468eb961215a554ace8088fa9097a7ad239f2d71",
"57d07d2b509c5860880583efe2ed4c5620a96747",
"d126d5d6b7cfaacd58494f1879547be9e91d1364",
"7dca806426058d59f4a9a4873e9219d65aea0987"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Accuracy percentage for all train-test pairs. Domains on rows are source domains and columns are target domains. Domain labels are D1: Amazon Instant Video, D2: Automotive, D3: Baby, D4: Beauty, D5: Books, D6: Clothing Accessories, D7: Electronics, D8: Health, D9: Home, D10: Kitchen, D11: Movies TV, D12: Music, D13: Office Products, D14: Patio, D15: Pet Supplies, D15: Shoes, D16: Software, D17: Sports Outdoors, D18: Tools Home Improvement, D19: Toys Games, D20: Video Games.",
"Table 2: N-grams co-occurrence matrix depicting the percent point match among the top-10 bigrams, trigrams and quadgrams for the data used in each domain. Domain labels are D1: Amazon Instant Video, D2: Automotive, D3: Baby, D4: Beauty, D5: Books, D6: Clothing Accessories, D7: Electronics, D8: Health, D9: Home, D10: Kitchen, D11: Movies TV, D12: Music, D13: Office Products, D14: Patio, D15: Pet Supplies, D15: Shoes, D16: Software, D17: Sports Outdoors, D18: Tools Home Improvement, D19: Toys Games, D20: Video Games.",
"Figure 1: Experimental Setup for Doc2Vec",
"Table 3: Our reccomendation chart based on CDSA results: In Domain Accuracy(when source and target domains are same), Average CDSA Accuracy Degradation(average cross-domain testing accuracy loss over all target domains), Best Source Domain, Best Target Domain for each domain.",
"Table 4: Precision and Normalised Ranking Accuracy (NRA) for top-K source domain matching over all domains.",
"Figure 2: Precision for K=5 over top 10 domains. Precision is the intersection between the top-K source domains predicted by the metric and top-K source domains as per CDSA accuracy.",
"Figure 3: Ranking Accuracy for K=5 over top 10 domains. Ranking accuracy is the number of predicted source domains which are ranked correctly by the metric."
],
"file": [
"3-Table1-1.png",
"3-Table2-1.png",
"5-Figure1-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"8-Figure2-1.png",
"8-Figure3-1.png"
]
} | [
"What 20 domains are available for selection of source domain?"
] | [
[
"2004.04478-3-Table1-1.png",
"2004.04478-Results-1",
"2004.04478-Sentiment Classifier-0"
]
] | [
"Amazon Instant Video\nAutomotive\nBaby\nBeauty\nBooks\nClothing Accessories\nElectronics\nHealth\nHome Kitchen\nMovies TV\nMusic\nOffice Products\nPatio\nPet Supplies\nShoes\nSoftware\nSports Outdoors\nTools Home Improvement\nToys Games\nVideo Games"
] | 217 |
1805.04558 | NRC-Canada at SMM4H Shared Task: Classifying Tweets Mentioning Adverse Drug Reactions and Medication Intake | Our team, NRC-Canada, participated in two shared tasks at the AMIA-2017 Workshop on Social Media Mining for Health Applications (SMM4H): Task 1 - classification of tweets mentioning adverse drug reactions, and Task 2 - classification of tweets describing personal medication intake. For both tasks, we trained Support Vector Machine classifiers using a variety of surface-form, sentiment, and domain-specific features. With nine teams participating in each task, our submissions ranked first on Task 1 and third on Task 2. Handling considerable class imbalance proved crucial for Task 1. We applied an under-sampling technique to reduce class imbalance (from about 1:10 to 1:2). Standard n-gram features, n-grams generalized over domain terms, as well as general-domain and domain-specific word embeddings had a substantial impact on the overall performance in both tasks. On the other hand, including sentiment lexicon features did not result in any improvement. | {
"paragraphs": [
[
"Adverse drug reactions (ADR)—unwanted or harmful reactions resulting from correct medical drug use—present a significant and costly public health problem. BIBREF0 Detecting, assessing, and preventing these events are the tasks of pharmacovigilance. In the pre-trial and trial stages of drug development, the number of people taking a drug is carefully controlled, and the collection of ADR data is centralized. However, after the drug is available widely, post-marketing surveillance often requires the collection and merging of data from disparate sources, BIBREF1 including patient-initiated spontaneous reporting. Unfortunately, adverse reactions to drugs are grossly underreported to health professionals. BIBREF2 , BIBREF3 Considerable issues with patient-initiated reporting have been identified, including various types of reporting biases and causal attributions of adverse events. BIBREF4 , BIBREF5 , BIBREF6 Nevertheless, a large number of people, freely and spontaneously, report ADRs on social media. The potential availability of inexpensive, large-scale, and real-time data on ADRs makes social media a valuable resource for pharmacovigilance.",
"Information required for pharmacovigilance includes a reported adverse drug reaction, a linked drug referred to by its full, abbreviated, or generic name, and an indication whether it was the social media post author that experienced the adverse event. However, there are considerable challenges in automatically extracting this information from free-text social media data. Social media texts are often short and informal, and include non-standard abbreviations and creative language. Drug names or their effects may be mis-spelled; they may be used metaphorically (e.g., Physics is like higher level maths on steroids). Drug names might have other non-drug related meanings (e.g., ecstasy). An adverse event may be negated or only expected (e.g., I bet I'll be running to the bathroom all night), or it may not apply to the author of the post at all (e.g., a re-tweet of a press release).",
"The shared task challenge organized as part of the AMIA-2017 Workshop on Social Media Mining for Health Applications (SMM4H) focused on Twitter data and had three tasks: Task 1 - recognizing whether a tweet is reporting an adverse drug reaction, Task 2 - inferring whether a tweet is reporting the intake of a medication by the tweeter, and Task 3 - mapping a free-text ADR to a standardized MEDDRA term. Our team made submissions for Task 1 and Task 2. For both tasks, we trained Support Vector Machine classifiers using a variety of surface-form, sentiment, and domain-specific features. Handling class imbalance with under-sampling was particularly helpful. Our submissions obtained F-scores of 0.435 on Task 1 and 0.673 on Task 2, resulting in a rank of first and third, respectively. (Nine teams participated in each task.) We make the resources created as part of this project freely available at the project webpage: http://saifmohammad.com/WebPages/tweets4health.htm."
],
[
"Below we describe in detail the two tasks we participated in, Task 1 and Task 2.",
" Task 1: Classification of Tweets for Adverse Drug Reaction",
"Task 1 was formulated as follows: given a tweet, determine whether it mentions an adverse drug reaction. This was a binary classification task:",
"The official evaluation metric was the F-score for class 1 (ADR): INLINEFORM0 ",
"The data for this task was created as part of a large project on ADR detection from social media by the DIEGO lab at Arizona State University. The tweets were collected using the generic and brand names of the drugs as well as their phonetic misspellings. Two domain experts under the guidance of a pharmacology expert annotated the tweets for the presence or absence of an ADR mention. The inter-annotator agreement for the two annotators was Cohens Kappa INLINEFORM0 . BIBREF7 ",
"Two labeled datasets were provided to the participants: a training set containing 10,822 tweets and a development set containing 4,845 tweets. These datasets were distributed as lists of tweet IDs, and the participants needed to download the tweets using the provided Python script. However, only about 60–70% of the tweets were accessible at the time of download (May 2017). The training set contained several hundreds of duplicate or near-duplicate messages, which we decided to remove. Near-duplicates were defined as tweets containing mostly the same text but differing in user mentions, punctuation, or other non-essential context. A separate test set of 9,961 tweets was provided without labels at the evaluation period. This set was distributed to the participants, in full, by email. Table TABREF1 shows the number of instances we used for training and testing our model.",
"Task 1 was a rerun of the shared task organized in 2016. BIBREF7 The best result obtained in 2016 was INLINEFORM0 . BIBREF8 The participants in the 2016 challenge employed various statistical machine learning techniques, such as Support Vector Machines, Maximum Entropy classifiers, Random Forests, and other ensembles. BIBREF8 , BIBREF9 A variety of features (e.g., word INLINEFORM1 -grams, word embeddings, sentiment, and topic models) as well as extensive medical resources (e.g., UMLS, lexicons of ADRs, drug lists, and lists of known drug-side effect pairs) were explored.",
" Task 2: Classification of Tweets for Medication Intake",
"Task 2 was formulated as follows: given a tweet, determine if it mentions personal medication intake, possible medication intake, or no intake is mentioned. This was a multi-class classification problem with three classes:",
"The official evaluation metric for this task was micro-averaged F-score of the class 1 (intake) and class 2 (possible intake): INLINEFORM0 INLINEFORM1 ",
"Information on how the data was collected and annotated was not available until after the evaluation.",
"Two labeled datasets were provided to the participants: a training set containing 8,000 tweets and a development set containing 2,260 tweets. As for Task 1, the training and development sets were distributed through tweet IDs and a download script. Around 95% of the tweets were accessible through download. Again, we removed duplicate and near-duplicate messages. A separate test set of 7,513 tweets was provided without labels at the evaluation period. This set was distributed to the participants, in full, by email. Table TABREF7 shows the number of instances we used for training and testing our model.",
"For each task, three submissions were allowed from each participating team."
],
[
"Both our systems, for Task 1 and Task 2, share the same classification framework and feature pool. The specific configurations of features and parameters were chosen for each task separately through cross-validation experiments (see Section SECREF31 )."
],
[
"For both tasks, we trained linear-kernel Support Vector Machine (SVM) classifiers. Past work has shown that SVMs are effective on text categorization tasks and robust when working with large feature spaces. In our cross-validation experiments on the training data, a linear-kernel SVM trained with the features described below was able to obtain better performance than a number of other statistical machine-learning algorithms, such as Stochastic Gradient Descent, AdaBoost, Random Forests, as well SVMs with other kernels (e.g., RBF, polynomic). We used an in-house implementation of SVM.",
"Handling Class Imbalance: For Task 1 (Classification of tweets for ADR), the provided datasets were highly imbalanced: the ADR class occurred in less than 12% of instances in the training set and less than 8% in the development and test sets. Most conventional machine-learning algorithms experience difficulty with such data, classifying most of the instances into the majority class. Several techniques have been proposed to address the issue of class imbalance, including over-sampling, under-sampling, cost-sensitive learning, and ensembles. BIBREF10 We experimented with several such techniques. The best performance in our cross-validation experiments was obtained using under-sampling with the class proportion 1:2. To train the model, we provided the classifier with all available data for the minority class (ADR) and a randomly sampled subset of the majority class (non-ADR) data in such a way that the number of instances in the majority class was twice the number of instances in the minority class. We found that this strategy significantly outperformed the more traditional balanced under-sampling where the majority class is sub-sampled to create a balanced class distribution. In one of our submissions for Task 1 (submission 3), we created an ensemble of three classifiers trained on the full set of instances in the minority class (ADR) and different subsets of the majority class (non-ADR) data. We varied the proportion of the majority class instances to the minority class instances: 1:2, 1:3, and 1:4. The final predictions were obtained by majority voting on the predictions of the three individual classifiers.",
"For Task 2 (Classification of tweets for medication intake), the provided datasets were also imbalanced but not as much as for Task 1: the class proportion in all subsets was close to 1:2:3. However, even for this task, we found some of the techniques for reducing class imbalance helpful. In particular, training an SVM classifier with different class weights improved the performance in the cross-validation experiments. These class weights are used to increase the cost of misclassification errors for the corresponding classes. The cost for a class is calculated as the generic cost parameter (parameter C in SVM) multiplied by the class weight. The best performance on the training data was achieved with class weights set to 4 for class 1 (intake), 2 for class 2 (possible intake), and 1 for class 3 (non-intake).",
"Preprocessing: The following pre-processing steps were performed. URLs and user mentions were normalized to http://someurl and @username, respectively. Tweets were tokenized with the CMU Twitter NLP tool. BIBREF11 "
],
[
"The classification model leverages a variety of general textual features as well as sentiment and domain-specific features described below. Many features were inspired by previous work on ADR BIBREF12 , BIBREF8 , BIBREF9 and our work on sentiment analysis (such as the winning system in the SemEval-2013 task on sentiment analysis in Twitter BIBREF13 and best performing stance detection system BIBREF14 ).",
" General Textual Features",
"The following surface-form features were used:",
"",
" INLINEFORM0 -grams: word INLINEFORM1 -grams (contiguous sequences of INLINEFORM2 tokens), non-contiguous word INLINEFORM3 -grams ( INLINEFORM4 -grams with one token replaced by *), character INLINEFORM5 -grams (contiguous sequences of INLINEFORM6 characters), unigram stems obtained with the Porter stemming algorithm;",
"",
"General-domain word embeddings:",
"",
"dense word representations generated with word2vec on ten million English-language tweets, summed over all tokens in the tweet,",
"",
"word embeddings distributed as part of ConceptNet 5.5 BIBREF15 , summed over all tokens in the tweet;",
"",
"General-domain word clusters: presence of tokens from the word clusters generated with the Brown clustering algorithm on 56 million English-language tweets; BIBREF11 ",
"",
"Negation: presence of simple negators (e.g., not, never); negation also affects the INLINEFORM0 -gram features—a term INLINEFORM1 becomes INLINEFORM2 if it occurs after a negator and before a punctuation mark;",
"",
"Twitter-specific features: the number of tokens with all characters in upper case, the number of hashtags, presence of positive and negative emoticons, whether the last token is a positive or negative emoticon, the number of elongated words (e.g., soooo);",
"",
"Punctuation: presence of exclamation and question marks, whether the last token contains an exclamation or question mark.",
"Domain-Specific Features",
"To generate domain-specific features, we used the following domain resources:",
"",
"Medication list: we compiled a medication list by selecting all one-word medication names from RxNorm (e.g, acetaminophen, nicorette, zoloft) since most of the medications mentioned in the training datasets were one-word strings.",
"",
"Pronoun Lexicon: we compiled a lexicon of first-person pronouns (e.g., I, ours, we'll), second-person pronouns (e.g., you, yourself), and third-person pronouns (e.g., them, mom's, parents').",
"",
"ADR Lexicon: a list of 13,699 ADR concepts compiled from COSTART, SIDER, CHV, and drug-related tweets by the DIEGO lab; BIBREF16 ",
"",
"domain word embeddings: dense word representations generated by the DIEGO lab by applying word2vec on one million tweets mentioning medications; BIBREF16 ",
"",
"domain word clusters: word clusters generated by the DIEGO lab using the word2vec tool to perform K-means clustering on the above mentioned domain word embeddings. BIBREF16 ",
"From these resources, the following domain-specific features were generated:",
"",
" INLINEFORM0 -grams generalized over domain terms (or domain generalized INLINEFORM1 -grams, for short): INLINEFORM2 -grams where words or phrases representing a medication (from our medication list) or an adverse drug reaction (from the ADR lexicon) are replaced with <MED> INLINEFORM3 and <ADR>, respectively (e.g., <MED> INLINEFORM4 makes me);",
"",
"Pronoun Lexicon features: the number of tokens from the Pronoun lexicon matched in the tweet;",
"",
"domain word embeddings: the sum of the domain word embeddings for all tokens in the tweet;",
"",
"domain word clusters: presence of tokens from the domain word clusters.",
"Sentiment Lexicon Features",
"We generated features using the sentiment scores provided in the following lexicons: Hu and Liu Lexicon BIBREF17 , Norms of Valence, Arousal, and Dominance BIBREF18 , labMT BIBREF19 , and NRC Emoticon Lexicon BIBREF20 . The first three lexicons were created through manual annotation while the last one, NRC Emoticon Lexicon, was generated automatically from a large collection of tweets with emoticons. The following set of features were calculated separately for each tweet and each lexicon:",
"",
"the number of tokens with INLINEFORM0 ;",
"",
"the total score = INLINEFORM0 ;",
"",
"the maximal score = INLINEFORM0 ;",
"",
"the score of the last token in the tweet.",
" We experimented with a number of other existing manually created or automatically generated sentiment and emotion lexicons, such as the NRC Emotion Lexicon BIBREF21 and the NRC Hashtag Emotion Lexicon BIBREF22 (http://saifmohammad.com/ WebPages/lexicons.html), but did not observe any improvement in the cross-validation experiments. None of the sentiment lexicon features were effective in the cross-validation experiments on Task 1; therefore, we did not include them in the final feature set for this task."
],
[
"For each task, our team submitted three sets of predictions. The submissions differed in the sets of features and parameters used to train the classification models (Table TABREF32 ).",
"While developing the system for Task 1 we noticed that the results obtained through cross-validation on the training data were almost 13 percentage points higher than the results obtained by the model trained on the full training set and applied on the development set. This drop in performance was mostly due to a drop in precision. This suggests that the datasets had substantial differences in the language use, possibly because they were collected and annotated at separate times. Therefore, we decided to optimize the parameters and features for submission 1 and submission 2 using two different strategies. The models for the three submissions were trained as follows:",
"",
"Submission 1: we randomly split the development set into 5 equal folds. We trained a classification model on the combination of four folds and the full training set, and tested the model on the remaining fifth fold of the development set. The procedure was repeated five times, each time testing on a different fold. The feature set and the classification parameters that resulted in the best INLINEFORM0 were used to train the final model.",
"",
"Submission 2: the features and parameters were selected based on the performance of the model trained on the full training set and tested on the full development set.",
"",
"Submission 3: we used the same features and parameters as in submission 1, except we trained an ensemble of three models, varying the class distribution in the sub-sampling procedure (1:2, 1:3, and 1:4).",
" For Task 2, the features and parameters were selected based on the cross-validation results run on the combination of the training and development set. We randomly split the development set into 3 equal folds. We trained a classification model on the combination of two folds and the full training set, and tested the model on the remaining third fold of the development set. The procedure was repeated three times, each time testing on a different fold. The models for the three submissions were trained as follows:",
"Submission 1: we used the features and parameters that gave the best results during cross-validation.",
"",
"Submission 2: we used the same features and parameters as in submission 1, but added features derived from two domain resources: the ADR lexicon and the Pronoun lexicon.",
"",
"Submission 3: we used the same features as in submission 1, but changed the SVM C parameter to 0.1.",
" For both tasks and all submissions, the final models were trained on the combination of the full training set and full development set, and applied on the test set."
],
[
"Task 1 (Classification of Tweets for ADR)",
"The results for our three official submissions are presented in Table TABREF39 (rows c.1–c.3). The best results in INLINEFORM0 were obtained with submission 1 (row c.1). The results for submission 2 are the lowest, with F-measure being 3.5 percentage points lower than the result for submission 1 (row c.2). The ensemble classifier (submission 3) shows a slightly worse performance than the best result. However, in the post-competition experiments, we found that larger ensembles (with 7–11 classifiers, each trained on a random sub-sample of the majority class to reduce class imbalance to 1:2) outperform our best single-classifier model by over one percentage point with INLINEFORM1 reaching up to INLINEFORM2 (row d). Our best submission is ranked first among the nine teams participated in this task (rows b.1–b.3).",
"Table TABREF39 also shows the results for two baseline classifiers. The first baseline is a classifier that assigns class 1 (ADR) to all instances (row a.1). The performance of this baseline is very low ( INLINEFORM0 ) due to the small proportion of class 1 instances in the test set. The second baseline is an SVM classifier trained only on the unigram features (row a.2). Its performance is much higher than the performance of the first baseline, but substantially lower than that of our system. By adding a variety of textual and domain-specific features as well as applying under-sampling, we are able to improve the classification performance by almost ten percentage points in F-measure.",
"To investigate the impact of each feature group on the overall performance, we conduct ablation experiments where we repeat the same classification process but remove one feature group at a time. Table TABREF40 shows the results of these ablation experiments for our best system (submission 1). Comparing the two major groups of features, general textual features (row b) and domain-specific features (row c), we observe that they both have a substantial impact on the performance. Removing one of these groups leads to a two percentage points drop in INLINEFORM0 . The general textual features mostly affect recall of the ADR class (row b) while the domain-specific features impact precision (row c). Among the general textual features, the most influential feature is general-domain word embeddings (row b.2). Among the domain-specific features, INLINEFORM1 -grams generalized over domain terms (row c.1) and domain word embeddings (row c.3) provide noticeable contribution to the overall performance. In the Appendix, we provide a list of top 25 INLINEFORM2 -gram features (including INLINEFORM3 -grams generalized over domain terms) ranked by their importance in separating the two classes.",
"As mentioned before, the data for Task 1 has high class imbalance, which significantly affects performance. Not applying any of the techniques for handling class imbalance, results in a drop of more than ten percentage points in F-measure—the model assigns most of the instances to the majority (non-ADR) class (row d). Also, applying under-sampling with the balanced class distribution results in performance significantly worse ( INLINEFORM0 ) than the performance of the submission 1 where under-sampling with class distribution of 1:2 was applied.",
"Error analysis on our best submission showed that there were 395 false negative errors (tweets that report ADRs, but classified as non-ADR) and 582 false positives (non-ADR tweets classified as ADR). Most of the false negatives were due to the creative ways in which people express themselves (e.g., i have metformin tummy today :-( ). Large amounts of labeled training data or the use of semi-supervised techniques to take advantage of large unlabeled domain corpora may help improve the detection of ADRs in such tweets. False positives were caused mostly due to the confusion between ADRs and other relations between a medication and a symptom. Tweets may mention both a medication and a symptom, but the symptom may not be an ADR. The medication may have an unexpected positive effect (e.g., reversal of hair loss), or may alleviate an existing health condition. Sometimes, the relation between the medication and the symptom is not explicitly mentioned in a tweet, yet an ADR can be inferred by humans.",
" Task 2 (Classification of Tweets for Medication Intake)",
"The results for our three official submissions on Task 2 are presented in Table TABREF41 (rows c.1–c.3). The best results in INLINEFORM0 are achieved with submission 1 (row c.1). The results for the other two submissions, submission 2 and submission 3, are quite similar to the results of submission 1 in both precision and recall (rows c.2–c.3). Adding the features from the ADR lexicon and the Pronoun lexicon did not result in performance improvement on the test set. Our best system is ranked third among the nine teams participated in this task (rows b.1–b.3).",
"Table TABREF41 also shows the results for two baseline classifiers. The first baseline is a classifier that assigns class 2 (possible medication intake) to all instances (row a.1). Class 2 is the majority class among the two positive classes, class 1 and class 2, in the training set. The performance of this baseline is quite low ( INLINEFORM0 ) since class 2 covers only 36% of the instances in the test set. The second baseline is an SVM classifier trained only on the unigram features (row a.2). The performance of such a simple model is surprisingly high ( INLINEFORM1 ), only 4.7 percentage points below the top result in the competition.",
"Table TABREF42 shows the performance of our best system (submission 1) when one of the feature groups is removed. In this task, the general textual features (row b) played a bigger role in the overall performance than the domain-specific (row c) or sentiment lexicon (row d) features. Removing this group of features results in more than 2.5 percentage points drop in the F-measure affecting both precision and recall (row b). However, removing any one feature subgroup in this group (e.g., general INLINEFORM0 -grams, general clusters, general embeddings, etc.) results only in slight drop or even increase in the performance (rows b.1–b.4). This indicates that the features in this group capture similar information. Among the domain-specific features, the INLINEFORM1 -grams generalized over domain terms are the most useful. The model trained without these INLINEFORM2 -grams features performs almost one percentage point worse than the model that uses all the features (row c.1). The sentiment lexicon features were not helpful (row d).",
"Our strategy of handling class imbalance through class weights did not prove successful on the test set (even though it resulted in increase of one point in F-measure in the cross-validation experiments). The model trained with the default class weights of 1 for all classes performs 0.7 percentage points better than the model trained with the class weights selected in cross-validation (row e).",
"The difference in how people can express medication intake vs. how they express that they have not taken a medication can be rather subtle. For example, the expression I need Tylenol indicates that the person has not taken the medication yet (class 3), whereas the expression I need more Tylenol indicates that the person has taken the medication (class 1). In still other instances, the word more might not be the deciding factor in whether a medication was taken or not (e.g., more Tylenol didn't help). A useful avenue of future work is to explore the role function words play in determining the semantics of a sentence, specifically, when they imply medication intake, when they imply the lack of medication intake, and when they are not relevant to determining medication intake."
],
[
"Our submissions to the 2017 SMM4H Shared Tasks Workshop obtained the first and third ranks in Task1 and Task 2, respectively. In Task 1, the systems had to determine whether a given tweet mentions an adverse drug reaction. In Task 2, the goal was to label a given tweet with one of the three classes: personal medication intake, possible medication intake, or non-intake. For both tasks, we trained an SVM classifier leveraging a number of textual, sentiment, and domain-specific features. Our post-competition experiments demonstrate that the most influential features in our system for Task 1 were general-domain word embeddings, domain-specific word embeddings, and INLINEFORM0 -grams generalized over domain terms. Moreover, under-sampling the majority class (non-ADR) to reduce class imbalance to 1:2 proved crucial to the success of our submission. Similarly, INLINEFORM1 -grams generalized over domain terms improved results significantly in Task 2. On the other hand, sentiment lexicon features were not helpful in both tasks.",
".2"
],
[
"We list the top 25 INLINEFORM0 -gram features (word INLINEFORM1 -grams and INLINEFORM2 -grams generalized over domain terms) ranked by mutual information of the presence/absence of INLINEFORM3 -gram features ( INLINEFORM4 ) and class labels ( INLINEFORM5 ): INLINEFORM6 ",
"where INLINEFORM0 for Task 1 and INLINEFORM1 for Task 2.",
"Here, <ADR> INLINEFORM0 represents a word or a phrase from the ADR lexicon; <MED> INLINEFORM1 represents a medication name from our one-word medication list.",
"4 Task 1",
"1. me",
"2. withdraw",
"3. i",
"4. makes",
"5. <ADR> INLINEFORM0 .",
"6. makes me",
"7. feel",
"8. me <ADR>",
"9. <MED> INLINEFORM0 <ADR>",
"10. made me",
"11. withdrawal",
"12. <MED> INLINEFORM0 makes",
"13. my",
" INLINEFORM0 ",
"14. <MED> INLINEFORM0 makes me",
"15. gain",
"16. weight",
"17. <ADR> INLINEFORM0 and",
"18. headache",
"19. made",
"20. tired",
"21. rivaroxaban diary",
"22. withdrawals",
"23. zomby",
"24. day",
"25. <MED> INLINEFORM0 diary",
"Task 2",
"1. steroids",
"2. need",
"3. i need",
"4. took",
"5. on steroids",
"6. on <MED>",
"7. i",
"8. i took",
"9. http://someurl",
"10. @username",
"11. her",
"12. on",
"13. him",
" INLINEFORM0 ",
"14. you",
"15. he",
"16. me",
"17. need a",
"18. kick",
"19. i need a",
"20. she",
"21. headache",
"22. kick in",
"23. this <MED>",
"24. need a <MED>",
"25. need <MED>"
]
],
"section_name": [
"Introduction",
"Task and Data Description",
"System Description",
"Machine Learning Framework",
"Features",
"Official Submissions",
"Results and Discussion",
"Conclusion",
"Appendix"
]
} | {
"answers": [
{
"annotation_id": [
"284b16b0fd61c8dac15bb827a2e63d0560a2e796",
"d621ad612b5447484ad3594a93cc558578a2f10b"
],
"answer": [
{
"evidence": [
"Table TABREF42 shows the performance of our best system (submission 1) when one of the feature groups is removed. In this task, the general textual features (row b) played a bigger role in the overall performance than the domain-specific (row c) or sentiment lexicon (row d) features. Removing this group of features results in more than 2.5 percentage points drop in the F-measure affecting both precision and recall (row b). However, removing any one feature subgroup in this group (e.g., general INLINEFORM0 -grams, general clusters, general embeddings, etc.) results only in slight drop or even increase in the performance (rows b.1–b.4). This indicates that the features in this group capture similar information. Among the domain-specific features, the INLINEFORM1 -grams generalized over domain terms are the most useful. The model trained without these INLINEFORM2 -grams features performs almost one percentage point worse than the model that uses all the features (row c.1). The sentiment lexicon features were not helpful (row d)."
],
"extractive_spans": [],
"free_form_answer": "Because sentiment features extracted the same information as other features.",
"highlighted_evidence": [
" However, removing any one feature subgroup in this group (e.g., general INLINEFORM0 -grams, general clusters, general embeddings, etc.) results only in slight drop or even increase in the performance (rows b.1–b.4). This indicates that the features in this group capture similar information. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We experimented with a number of other existing manually created or automatically generated sentiment and emotion lexicons, such as the NRC Emotion Lexicon BIBREF21 and the NRC Hashtag Emotion Lexicon BIBREF22 (http://saifmohammad.com/ WebPages/lexicons.html), but did not observe any improvement in the cross-validation experiments. None of the sentiment lexicon features were effective in the cross-validation experiments on Task 1; therefore, we did not include them in the final feature set for this task."
],
"extractive_spans": [
"did not observe any improvement in the cross-validation experiments"
],
"free_form_answer": "",
"highlighted_evidence": [
"We experimented with a number of other existing manually created or automatically generated sentiment and emotion lexicons, such as the NRC Emotion Lexicon BIBREF21 and the NRC Hashtag Emotion Lexicon BIBREF22 (http://saifmohammad.com/ WebPages/lexicons.html), but did not observe any improvement in the cross-validation experiments. None of the sentiment lexicon features were effective in the cross-validation experiments on Task 1; therefore, we did not include them in the final feature set for this task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6191da6d180bf31598d7e488c9e4e085460014b0",
"d7f6b12b148cc7d9f61da376da9f5cce619bc09b"
],
"answer": [
{
"evidence": [
"Two labeled datasets were provided to the participants: a training set containing 10,822 tweets and a development set containing 4,845 tweets. These datasets were distributed as lists of tweet IDs, and the participants needed to download the tweets using the provided Python script. However, only about 60–70% of the tweets were accessible at the time of download (May 2017). The training set contained several hundreds of duplicate or near-duplicate messages, which we decided to remove. Near-duplicates were defined as tweets containing mostly the same text but differing in user mentions, punctuation, or other non-essential context. A separate test set of 9,961 tweets was provided without labels at the evaluation period. This set was distributed to the participants, in full, by email. Table TABREF1 shows the number of instances we used for training and testing our model."
],
"extractive_spans": [],
"free_form_answer": "10822, 4845",
"highlighted_evidence": [
"Two labeled datasets were provided to the participants: a training set containing 10,822 tweets and a development set containing 4,845 tweets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Two labeled datasets were provided to the participants: a training set containing 10,822 tweets and a development set containing 4,845 tweets. These datasets were distributed as lists of tweet IDs, and the participants needed to download the tweets using the provided Python script. However, only about 60–70% of the tweets were accessible at the time of download (May 2017). The training set contained several hundreds of duplicate or near-duplicate messages, which we decided to remove. Near-duplicates were defined as tweets containing mostly the same text but differing in user mentions, punctuation, or other non-essential context. A separate test set of 9,961 tweets was provided without labels at the evaluation period. This set was distributed to the participants, in full, by email. Table TABREF1 shows the number of instances we used for training and testing our model.",
"Two labeled datasets were provided to the participants: a training set containing 8,000 tweets and a development set containing 2,260 tweets. As for Task 1, the training and development sets were distributed through tweet IDs and a download script. Around 95% of the tweets were accessible through download. Again, we removed duplicate and near-duplicate messages. A separate test set of 7,513 tweets was provided without labels at the evaluation period. This set was distributed to the participants, in full, by email. Table TABREF7 shows the number of instances we used for training and testing our model."
],
"extractive_spans": [
"training set containing 10,822 tweets and a development set containing 4,845 tweets",
"test set of 9,961 tweets was provided without labels",
"training set containing 8,000 tweets and a development set containing 2,260 tweets",
"test set of 7,513 tweets"
],
"free_form_answer": "",
"highlighted_evidence": [
"Two labeled datasets were provided to the participants: a training set containing 10,822 tweets and a development set containing 4,845 tweets.",
"A separate test set of 9,961 tweets was provided without labels at the evaluation period.",
"Two labeled datasets were provided to the participants: a training set containing 8,000 tweets and a development set containing 2,260 tweets.",
"A separate test set of 7,513 tweets was provided without labels at the evaluation period."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"b2bcf52f656b01e61d8a8ace2eff364a1e42d99f",
"d3e39e14554b47fa29ece41b0a2edaff85536bd9"
],
"answer": [
{
"evidence": [
"The official evaluation metric for this task was micro-averaged F-score of the class 1 (intake) and class 2 (possible intake): INLINEFORM0 INLINEFORM1"
],
"extractive_spans": [
"micro-averaged F-score of the class 1 (intake) and class 2 (possible intake)"
],
"free_form_answer": "",
"highlighted_evidence": [
"The official evaluation metric for this task was micro-averaged F-score of the class 1 (intake) and class 2 (possible intake): INLINEFORM0 INLINEFORM1"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The official evaluation metric was the F-score for class 1 (ADR): INLINEFORM0",
"The official evaluation metric for this task was micro-averaged F-score of the class 1 (intake) and class 2 (possible intake): INLINEFORM0 INLINEFORM1"
],
"extractive_spans": [
"F-score for class 1 (ADR)",
"micro-averaged F-score of the class 1 (intake) and class 2 (possible intake)"
],
"free_form_answer": "",
"highlighted_evidence": [
"The official evaluation metric was the F-score for class 1 (ADR): INLINEFORM0",
"The official evaluation metric for this task was micro-averaged F-score of the class 1 (intake) and class 2 (possible intake): INLINEFORM0 INLINEFORM1"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"70cda47c152091ee476d892bf54bae99cd5d9672",
"d81d154288d7240d5f7ec115ef16bf14a4f41f58"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"The results for our three official submissions are presented in Table TABREF39 (rows c.1–c.3). The best results in INLINEFORM0 were obtained with submission 1 (row c.1). The results for submission 2 are the lowest, with F-measure being 3.5 percentage points lower than the result for submission 1 (row c.2). The ensemble classifier (submission 3) shows a slightly worse performance than the best result. However, in the post-competition experiments, we found that larger ensembles (with 7–11 classifiers, each trained on a random sub-sample of the majority class to reduce class imbalance to 1:2) outperform our best single-classifier model by over one percentage point with INLINEFORM1 reaching up to INLINEFORM2 (row d). Our best submission is ranked first among the nine teams participated in this task (rows b.1–b.3).",
"FLOAT SELECTED: Table 4: Task 1: Results for our three official submissions, baselines, and top three teams. Evaluation measures for Task 1 are precision (P), recall (R), and F1-measure (F) for class 1 (ADR).",
"The results for our three official submissions on Task 2 are presented in Table TABREF41 (rows c.1–c.3). The best results in INLINEFORM0 are achieved with submission 1 (row c.1). The results for the other two submissions, submission 2 and submission 3, are quite similar to the results of submission 1 in both precision and recall (rows c.2–c.3). Adding the features from the ADR lexicon and the Pronoun lexicon did not result in performance improvement on the test set. Our best system is ranked third among the nine teams participated in this task (rows b.1–b.3).",
"FLOAT SELECTED: Table 6: Task 2: Results for our three official submissions, baselines, and top three teams. Evaluation measures for Task 2 are micro-averaged P, R, and F1-score for class 1 (intake) and class 2 (possible intake)."
],
"extractive_spans": [],
"free_form_answer": "0.435 on Task1 and 0.673 on Task2.",
"highlighted_evidence": [
"The results for our three official submissions are presented in Table TABREF39 (rows c.1–c.3). The best results in INLINEFORM0 were obtained with submission 1 (row c.1). The results for submission 2 are the lowest, with F-measure being 3.5 percentage points lower than the result for submission 1 (row c.2).",
"FLOAT SELECTED: Table 4: Task 1: Results for our three official submissions, baselines, and top three teams. Evaluation measures for Task 1 are precision (P), recall (R), and F1-measure (F) for class 1 (ADR).",
"The results for our three official submissions on Task 2 are presented in Table TABREF41 (rows c.1–c.3). The best results in INLINEFORM0 are achieved with submission 1 (row c.1).",
"FLOAT SELECTED: Table 6: Task 2: Results for our three official submissions, baselines, and top three teams. Evaluation measures for Task 2 are micro-averaged P, R, and F1-score for class 1 (intake) and class 2 (possible intake)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"32e66d12c81f097236c0dcd6747c0b703a938973",
"b830a4f1b76d98454895bbf93c3d7b675e3997ef"
],
"answer": [
{
"evidence": [
"Twitter-specific features: the number of tokens with all characters in upper case, the number of hashtags, presence of positive and negative emoticons, whether the last token is a positive or negative emoticon, the number of elongated words (e.g., soooo);",
"From these resources, the following domain-specific features were generated:",
"Pronoun Lexicon features: the number of tokens from the Pronoun lexicon matched in the tweet;",
"domain word embeddings: the sum of the domain word embeddings for all tokens in the tweet;",
"domain word clusters: presence of tokens from the domain word clusters."
],
"extractive_spans": [
"INLINEFORM0 -grams generalized over domain terms",
"Pronoun Lexicon features",
"domain word embeddings",
"domain word clusters"
],
"free_form_answer": "",
"highlighted_evidence": [
"Twitter-specific features: the number of tokens with all characters in upper case, the number of hashtags, presence of positive and negative emoticons, whether the last token is a positive or negative emoticon, the number of elongated words (e.g., soooo);",
"From these resources, the following domain-specific features were generated:\n\nINLINEFORM0 -grams generalized over domain terms (or domain generalized INLINEFORM1 -grams, for short): INLINEFORM2 -grams where words or phrases representing a medication (from our medication list) or an adverse drug reaction (from the ADR lexicon) are replaced with INLINEFORM3 and , respectively (e.g., INLINEFORM4 makes me);\n\nPronoun Lexicon features: the number of tokens from the Pronoun lexicon matched in the tweet;\n\ndomain word embeddings: the sum of the domain word embeddings for all tokens in the tweet;\n\ndomain word clusters: presence of tokens from the domain word clusters."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"From these resources, the following domain-specific features were generated:",
"Pronoun Lexicon features: the number of tokens from the Pronoun lexicon matched in the tweet;",
"domain word embeddings: the sum of the domain word embeddings for all tokens in the tweet;",
"domain word clusters: presence of tokens from the domain word clusters."
],
"extractive_spans": [
"INLINEFORM0 -grams generalized over domain terms",
"Pronoun Lexicon features",
"domain word embeddings",
"domain word clusters"
],
"free_form_answer": "",
"highlighted_evidence": [
"From these resources, the following domain-specific features were generated:\n\nINLINEFORM0 -grams generalized over domain terms (or domain generalized INLINEFORM1 -grams, for short): INLINEFORM2 -grams where words or phrases representing a medication (from our medication list) or an adverse drug reaction (from the ADR lexicon) are replaced with INLINEFORM3 and , respectively (e.g., INLINEFORM4 makes me);\n\nPronoun Lexicon features: the number of tokens from the Pronoun lexicon matched in the tweet;\n\ndomain word embeddings: the sum of the domain word embeddings for all tokens in the tweet;\n\ndomain word clusters: presence of tokens from the domain word clusters."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"865f996f04756927b47d976b29da771ee6102e80",
"a447935769118a165f95cd6c51433f0299bcb8d4"
],
"answer": [
{
"evidence": [
"We generated features using the sentiment scores provided in the following lexicons: Hu and Liu Lexicon BIBREF17 , Norms of Valence, Arousal, and Dominance BIBREF18 , labMT BIBREF19 , and NRC Emoticon Lexicon BIBREF20 . The first three lexicons were created through manual annotation while the last one, NRC Emoticon Lexicon, was generated automatically from a large collection of tweets with emoticons. The following set of features were calculated separately for each tweet and each lexicon:",
"the number of tokens with INLINEFORM0 ;",
"the total score = INLINEFORM0 ;",
"the maximal score = INLINEFORM0 ;",
"the score of the last token in the tweet."
],
"extractive_spans": [
"the number of tokens with INLINEFORM0",
"the total score = INLINEFORM0",
"the maximal score = INLINEFORM0",
"the score of the last token in the tweet"
],
"free_form_answer": "",
"highlighted_evidence": [
"We generated features using the sentiment scores provided in the following lexicons: Hu and Liu Lexicon BIBREF17 , Norms of Valence, Arousal, and Dominance BIBREF18 , labMT BIBREF19 , and NRC Emoticon Lexicon BIBREF20 .",
"The following set of features were calculated separately for each tweet and each lexicon:\n\nthe number of tokens with INLINEFORM0 ;\n\nthe total score = INLINEFORM0 ;\n\nthe maximal score = INLINEFORM0 ;\n\nthe score of the last token in the tweet."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We generated features using the sentiment scores provided in the following lexicons: Hu and Liu Lexicon BIBREF17 , Norms of Valence, Arousal, and Dominance BIBREF18 , labMT BIBREF19 , and NRC Emoticon Lexicon BIBREF20 . The first three lexicons were created through manual annotation while the last one, NRC Emoticon Lexicon, was generated automatically from a large collection of tweets with emoticons. The following set of features were calculated separately for each tweet and each lexicon:",
"the number of tokens with INLINEFORM0 ;",
"the total score = INLINEFORM0 ;",
"the maximal score = INLINEFORM0 ;",
"the score of the last token in the tweet."
],
"extractive_spans": [
"The following set of features were calculated separately for each tweet and each lexicon:\n\nthe number of tokens with INLINEFORM0 ;\n\nthe total score = INLINEFORM0 ;\n\nthe maximal score = INLINEFORM0 ;\n\nthe score of the last token in the tweet."
],
"free_form_answer": "",
"highlighted_evidence": [
"We generated features using the sentiment scores provided in the following lexicons: Hu and Liu Lexicon BIBREF17 , Norms of Valence, Arousal, and Dominance BIBREF18 , labMT BIBREF19 , and NRC Emoticon Lexicon BIBREF20 . The first three lexicons were created through manual annotation while the last one, NRC Emoticon Lexicon, was generated automatically from a large collection of tweets with emoticons. The following set of features were calculated separately for each tweet and each lexicon:\n\nthe number of tokens with INLINEFORM0 ;\n\nthe total score = INLINEFORM0 ;\n\nthe maximal score = INLINEFORM0 ;\n\nthe score of the last token in the tweet."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"380e3fc9f453a3d4d526e6ac59d26a7faade94ba",
"3d6bb4cf6a47991d6af06091d866131422097302"
],
"answer": [
{
"evidence": [
"The following surface-form features were used:",
"INLINEFORM0 -grams: word INLINEFORM1 -grams (contiguous sequences of INLINEFORM2 tokens), non-contiguous word INLINEFORM3 -grams ( INLINEFORM4 -grams with one token replaced by *), character INLINEFORM5 -grams (contiguous sequences of INLINEFORM6 characters), unigram stems obtained with the Porter stemming algorithm;",
"General-domain word embeddings:",
"dense word representations generated with word2vec on ten million English-language tweets, summed over all tokens in the tweet,",
"word embeddings distributed as part of ConceptNet 5.5 BIBREF15 , summed over all tokens in the tweet;",
"General-domain word clusters: presence of tokens from the word clusters generated with the Brown clustering algorithm on 56 million English-language tweets; BIBREF11",
"Negation: presence of simple negators (e.g., not, never); negation also affects the INLINEFORM0 -gram features—a term INLINEFORM1 becomes INLINEFORM2 if it occurs after a negator and before a punctuation mark;",
"Twitter-specific features: the number of tokens with all characters in upper case, the number of hashtags, presence of positive and negative emoticons, whether the last token is a positive or negative emoticon, the number of elongated words (e.g., soooo);",
"Punctuation: presence of exclamation and question marks, whether the last token contains an exclamation or question mark."
],
"extractive_spans": [
"INLINEFORM0 -grams",
"General-domain word embeddings",
"General-domain word clusters",
"Negation: presence of simple negators",
"the number of tokens with all characters in upper case, the number of hashtags, presence of positive and negative emoticons, whether the last token is a positive or negative emoticon, the number of elongated words",
"presence of exclamation and question marks, whether the last token contains an exclamation or question mark"
],
"free_form_answer": "",
"highlighted_evidence": [
"The following surface-form features were used:\n\nINLINEFORM0 -grams: word INLINEFORM1 -grams (contiguous sequences of INLINEFORM2 tokens), non-contiguous word INLINEFORM3 -grams ( INLINEFORM4 -grams with one token replaced by *), character INLINEFORM5 -grams (contiguous sequences of INLINEFORM6 characters), unigram stems obtained with the Porter stemming algorithm;\n\nGeneral-domain word embeddings:\n\ndense word representations generated with word2vec on ten million English-language tweets, summed over all tokens in the tweet,\n\nword embeddings distributed as part of ConceptNet 5.5 BIBREF15 , summed over all tokens in the tweet;\n\nGeneral-domain word clusters: presence of tokens from the word clusters generated with the Brown clustering algorithm on 56 million English-language tweets; BIBREF11\n\nNegation: presence of simple negators (e.g., not, never); negation also affects the INLINEFORM0 -gram features—a term INLINEFORM1 becomes INLINEFORM2 if it occurs after a negator and before a punctuation mark;\n\nTwitter-specific features: the number of tokens with all characters in upper case, the number of hashtags, presence of positive and negative emoticons, whether the last token is a positive or negative emoticon, the number of elongated words (e.g., soooo);\n\nPunctuation: presence of exclamation and question marks, whether the last token contains an exclamation or question mark."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The following surface-form features were used:",
"INLINEFORM0 -grams: word INLINEFORM1 -grams (contiguous sequences of INLINEFORM2 tokens), non-contiguous word INLINEFORM3 -grams ( INLINEFORM4 -grams with one token replaced by *), character INLINEFORM5 -grams (contiguous sequences of INLINEFORM6 characters), unigram stems obtained with the Porter stemming algorithm;",
"General-domain word embeddings:",
"dense word representations generated with word2vec on ten million English-language tweets, summed over all tokens in the tweet,",
"word embeddings distributed as part of ConceptNet 5.5 BIBREF15 , summed over all tokens in the tweet;",
"General-domain word clusters: presence of tokens from the word clusters generated with the Brown clustering algorithm on 56 million English-language tweets; BIBREF11",
"Negation: presence of simple negators (e.g., not, never); negation also affects the INLINEFORM0 -gram features—a term INLINEFORM1 becomes INLINEFORM2 if it occurs after a negator and before a punctuation mark;",
"Twitter-specific features: the number of tokens with all characters in upper case, the number of hashtags, presence of positive and negative emoticons, whether the last token is a positive or negative emoticon, the number of elongated words (e.g., soooo);",
"Punctuation: presence of exclamation and question marks, whether the last token contains an exclamation or question mark."
],
"extractive_spans": [
"INLINEFORM0 -grams",
"General-domain word embeddings",
"General-domain word clusters",
"Negation",
"Twitter-specific features",
"Punctuation"
],
"free_form_answer": "",
"highlighted_evidence": [
"The following surface-form features were used:\n\nINLINEFORM0 -grams: word INLINEFORM1 -grams (contiguous sequences of INLINEFORM2 tokens), non-contiguous word INLINEFORM3 -grams ( INLINEFORM4 -grams with one token replaced by *), character INLINEFORM5 -grams (contiguous sequences of INLINEFORM6 characters), unigram stems obtained with the Porter stemming algorithm;\n\nGeneral-domain word embeddings:\n\ndense word representations generated with word2vec on ten million English-language tweets, summed over all tokens in the tweet,\n\nword embeddings distributed as part of ConceptNet 5.5 BIBREF15 , summed over all tokens in the tweet;\n\nGeneral-domain word clusters: presence of tokens from the word clusters generated with the Brown clustering algorithm on 56 million English-language tweets; BIBREF11\n\nNegation: presence of simple negators (e.g., not, never); negation also affects the INLINEFORM0 -gram features—a term INLINEFORM1 becomes INLINEFORM2 if it occurs after a negator and before a punctuation mark;\n\nTwitter-specific features: the number of tokens with all characters in upper case, the number of hashtags, presence of positive and negative emoticons, whether the last token is a positive or negative emoticon, the number of elongated words (e.g., soooo);\n\nPunctuation: presence of exclamation and question marks, whether the last token contains an exclamation or question mark."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
"",
""
],
"question": [
"why do they think sentiment features do not result in improvement?",
"what was the size of the datasets?",
"what were the evaluation metrics?",
"what were their results on both tasks?",
"what domain-specific features did they train on?",
"what are the sentiment features used?",
"what surface-form features were used?"
],
"question_id": [
"800fcd8b08d36c5276f9e5e1013208d41b46de59",
"cdbbba22e62bc9402aea74ac5960503f59e984ff",
"301a453abaa3bc15976817fefce7a41f3b779907",
"f3673f6375f065014e8e4bb8c7adf54c1c7d7862",
"0bd3bea892c34a3820e98c4a42cdeda03753146b",
"8cf5abf0126f19253930478b02f0839af28e4093",
"d211a37830c59aeab4970fdb2e03d9b7368b421c"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: The number of available instances in the training, development, and test sets for Task 1.",
"Table 2: The number of available instances in the training, development, and test sets for Task 2.",
"Table 3: Feature sets and parameters for the three official submissions for Task 1 and Task 2. Xspecifies the features included in the classification model; ’-’ specifies the features not included.",
"Table 4: Task 1: Results for our three official submissions, baselines, and top three teams. Evaluation measures for Task 1 are precision (P), recall (R), and F1-measure (F) for class 1 (ADR).",
"Table 5: Task 1: Results of our best system (submission 1) on the test set when one of the feature groups is removed.",
"Table 6: Task 2: Results for our three official submissions, baselines, and top three teams. Evaluation measures for Task 2 are micro-averaged P, R, and F1-score for class 1 (intake) and class 2 (possible intake).",
"Table 7: Task 2: Results of our best system (submission 1) on the test set when one of the feature groups is removed."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"8-Table5-1.png",
"9-Table6-1.png",
"9-Table7-1.png"
]
} | [
"why do they think sentiment features do not result in improvement?",
"what was the size of the datasets?",
"what were their results on both tasks?"
] | [
[
"1805.04558-Results and Discussion-9"
],
[
"1805.04558-Task and Data Description-5",
"1805.04558-Task and Data Description-11"
],
[
"1805.04558-9-Table6-1.png",
"1805.04558-7-Table4-1.png",
"1805.04558-Results and Discussion-7",
"1805.04558-Results and Discussion-1"
]
] | [
"Because sentiment features extracted the same information as other features.",
"10822, 4845",
"0.435 on Task1 and 0.673 on Task2."
] | 218 |
1911.03324 | Transforming Wikipedia into Augmented Data for Query-Focused Summarization | The manual construction of a query-focused summarization corpus is costly and time-consuming. The limited size of existing datasets renders training data-driven summarization models challenging. In this paper, we use Wikipedia to automatically collect a large query-focused summarization dataset (named WIKIREF) of more than 280,000 examples, which can serve as a means of data augmentation. Moreover, we develop a query-focused summarization model based on BERT to extract summaries from the documents. Experimental results on three DUC benchmarks show that the model pre-trained on WIKIREF has already achieved reasonable performance. After fine-tuning on the specific datasets, the model with data augmentation outperforms the state of the art on the benchmarks. | {
"paragraphs": [
[
"Query-focused summarization aims to create a brief, well-organized and informative summary for a document with specifications described in the query. Various unsupervised methods BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5 and supervised methods BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 have been proposed for the purpose. The task is first introduced in DUC 2005 BIBREF11, with human annotated data released until 2007. The DUC benchmark datasets are of high quality. But the limited size renders training query-focused summarization models challenging, especially for the data-driven methods. Meanwhile, manually constructing a large-scale query-focused summarization dataset is quite costly and time-consuming.",
"In order to advance query-focused summarization with limited data, we improve the summarization model with data augmentation. Specifically, we transform Wikipedia into a large-scale query-focused summarization dataset (named as WikiRef). To automatically construct query-focused summarization examples using Wikipedia, the statements' citations in Wikipedia articles as pivots to align the queries and documents. Figure FIGREF1 shows an example that is constructed by the proposed method. We first take the highlighted statement as the summary. Its supporting citation is expected to provide an adequate context to derive the statement, thus can serve as the source document. On the other hand, the section titles give a hint about which aspect of the document is the summary's focus. Therefore, we use the article title and the section titles of the statement to form the query. Given that Wikipedia is the largest online encyclopedia, we can automatically construct massive query-focused summarization examples.",
"Most systems on the DUC benchmark are extractive summarization models. These systems are usually decomposed into two subtasks, i.e., sentence scoring and sentence selection. Sentence scoring aims to measure query relevance and sentence salience for each sentence, which mainly adopts feature-based methods BIBREF0, BIBREF7, BIBREF3. Sentence selection is used to generate the final summary with the minimal redundancy by selecting highest ranking sentences one by one.",
"In this paper, we develop a BERT-based model for query-focused extractive summarization. The model takes the concatenation of the query and the document as input. The query-sentence and sentence-sentence relationships are jointly modeled by the self-attention mechanism BIBREF12. The model is fine-tuned to utilize the general language representations of BERT BIBREF13.",
"Experimental results on three DUC benchmarks show that the model achieves competitive performance by fine-tuning and outperforms previous state-of-the-art summarization models with data augmentation. Meanwhile, the results demonstrate that we can use WikiRef as a large-scale dataset to advance query-focused summarization research."
],
[
"A wide range of unsupervised approaches have been proposed for extractive summarization. Surface features, such as n-gram overlapping, term frequency, document frequency, sentence positions BIBREF10, sentence length BIBREF9, and TF-IDF cosine similarity BIBREF3. Maximum Marginal Relevance (MMR) BIBREF0 greedily selects sentences and considered the trade-off between saliency and redundancy. BIBREF2 ilp treat sentence selection as an optimization problem and solve it using Integer Linear Programming (ILP). BIBREF14 lin2010multi propose using submodular functions to maximize an objective function that considers the trade-off between coverage and redundancy terms.",
"Graph-based models make use of various inter-sentence and query-sentence relationships are also widely applied in the extractive summarization area. LexRank BIBREF1 scores sentences in a graph of sentence similarities. BIBREF3 qfsgraph apply manifold ranking to make use of the sentence-to-sentence and sentence-to-document relationships and the sentence-to-query relationships. We also model the above mentioned relationships, except for the cross-document relationships, like a graph at token level, which are aggregated into distributed representations of sentences.",
"Supervised methods with machine learning techniques BIBREF6, BIBREF7, BIBREF8 are also used to better estimate sentence importance. In recent years, few deep neural networks based approaches have been used for extractive document summarization. BIBREF9 cao-attsum propose an attention-base model which jointly handles sentence salience ranking and query relevance ranking. It automatically generates distributed representations for sentences as well as the document. To leverage contextual relations for sentence modeling, BIBREF10 Ren-crsum propose CRSum that learns sentence representations and context representations jointly with a two-level attention mechanism. The small data size is the main obstacle of developing neural models for query-focused summarization."
],
[
"Given a query $\\mathcal {Q}=(q_1, q_2,...,q_m)$ of $m$ token sequences and a document $\\mathcal {D}=(s_1, s_2, ..., s_n)$ containing $n$ sentences, extractive query-focused summarization aims to extract a salient subset of $ \\mathcal {D}$ that is related to the query as the output summary $\\mathcal {\\hat{S}}=\\left\\lbrace \\hat{s_i}\\vert \\hat{s_i} \\in \\mathcal {D}\\right\\rbrace $. In general, the extrative summarization task can be tackled by assigning each sentence a label to indicate the inclusion in the summary or estimating scores for ranking sentences, namely sentence classification or sentence regression.",
"In sentence classification, the probability of putting sentence $s_i$ in the output summary is $P\\left(s_i\\vert \\mathcal {Q},\\mathcal {D}\\right)$. We factorize the probability of predicting $\\hat{\\mathcal {S}}$ as the output summary $P(\\hat{\\mathcal {S}}\\vert \\mathcal {Q},\\mathcal {D})$ of document $\\mathcal {D}$ given query $\\mathcal {Q}$ as: P(SQ,D)=siS P(siQ,D) )",
"In sentence regression, extractive summarization is achieved via sentence scoring and sentence selection. The former scores $\\textrm {r}(s_i\\vert \\mathcal {Q},\\mathcal {D})$ a sentence $s_i$ by considering its relevance to the query $\\mathcal {Q}$ and its salience to the document $\\mathcal {D}$. The latter generates a summary by ranking sentences under certain constraints, e.g., the number of sentences and the length of the summary."
],
[
"Figure FIGREF2 gives an overview of our BERT-based extractive query-focused summmarization model. For each sentence, we use BERT to encode its query relevance, document context and salient meanings into a vector representation. Then the vector representations are fed into a simple output layer to predict the label or estimate the score of each sentence."
],
[
"The query $\\mathcal {Q}$ and document $\\mathcal {D}$ are flattened and packed as a token sequence as input. Following the standard practice of BERT, the input representation of each token is constructed by summing the corresponding token, segmentation and position embeddings. Token embeddings project the one-hot input tokens into dense vector representations. Two segment embeddings $\\mathbf {E}_Q$ and $\\mathbf {E}_D$ are used to indicate query and document tokens respectively. Position embeddings indicate the absolute position of each token in the input sequence. To embody the hierarchical structure of the query in a sequence, we insert a [L#] token before the #-th query token sequence. For each sentence, we insert a [CLS] token at the beginning and a [SEP] token at the end to draw a clear sentence boundary."
],
[
"In this layer, we use BERT BIBREF13, a deep Transformer BIBREF12 consisting of stacked self-attention layers, as encoder to aggregate query, intra-sentence and inter-sentence information into sentence representations. Given the packed input embeddings $\\mathbf {H}^0=\\left[\\mathbf {x}_1,...,\\mathbf {x}_{|x|}\\right]$, we apply an $L$-layer Transformer to encode the input: Hl=Transformerl(Hl-1) where $l\\in \\left[1,L\\right]$. At last, we use the hidden vector $\\mathbf {h}_i^L$ of the $i$-th [CLS] token as the contextualized representation of the subsequent sentence."
],
[
"The output layer is used to score sentences for extractive query-focused summarization. Given $\\mathbf {h}_i^L\\in \\mathbb {R}^d$ is the vector representation for the i-th sentence. When the extracive summarization is carried out through sentence classification , the output layer is a linear layer followed by a sigmoid function: P(siQ,D)=sigmoid(WchiL+bc) where $\\mathbf {W}_c$ and $\\mathbf {b}_c$ are trainable parameters. The output is the probability of including the i-th sentence in the summary.",
"In the setting of sentence regression, a linear layer without activation function is used to estimate the score of a sentence: r(siQ,D)=WrhiL+br where $\\mathbf {W}_r$ and $\\mathbf {b}_r$ are trainable parameters."
],
[
"The training objective of sentence classification is to minimize the binary cross-entropy loss:",
"where $y_i\\in \\lbrace 0,1\\rbrace $ is the oracle label of the i-th sentence.",
"The training objective of sentence regression is to minimize the mean square error between the estimated score and the oracle score: L=1nin (r(siQ,D) - f(siS*))2 where $\\mathcal {S}^*$ is the oracle summary and $\\textrm {f}(s_i\\vert \\mathcal {S}^*)$ is the oracle score of the i-th sentence."
],
[
"We automatically construct a query-focused summarization dataset (named as WikiRef) using Wikipedia and corresponding reference web pages. In the following sections, we will first elaborate the creation process. Then we will analyze the queries, documents and summaries quantitatively and qualitatively."
],
[
"We follow two steps to collect and process the data: (1) we crawl English Wikipedia and the references of the Wikipedia articles and parse the HTML sources into plain text; (2) we preprocess the plain text and filter the examples through a set of fine-grained rules."
],
[
"To maintain the highest standards possible, most statements in Wikipedia are attributed to reliable, published sources that can be accessed through hyperlinks. In the first step, we parse the English Wikipedia database dump into plain text and save statements with citations. If a statement is attributed multiple citations, only the first citation is used. We also limit the sources of the citations to four types, namely web pages, newspaper articles, press and press release. A statement may contain more than one sentence.",
"The statement can be seen as a summary of the supporting citations from a certain aspect. Therefore, we can take the body of the citation as the document and treat the statement as the summary. Meanwhile, the section titles of a statement could be used as a natural coarse-grained query to specify the focused aspects. Then we can form a complete query-focused summarization example by referring to the statement, attributed citation and section titles along with the article title as summary, document and query respectively. It is worth noticing that the queries in WikiRef dataset are thus keywords, instead of natural language as in other query-focused summarization datasets.",
"We show an example in Figure FIGREF8 to illustrate the raw data collection process. The associated query, summary and the document are highlighted in colors in the diagram. At last, we have collected more than 2,000,000 English examples in total after the raw data collection step."
],
[
"To make sure the statement is a plausible summary of the cited document, we process and filter the examples through a set of fine-grained rules. The text is tokenized and lemmatized using Spacy. First, we calculate the unigram recall of the document, where only the non-stop words are considered. We throw out the example whose score is lower than the threshold. Here we set the threshold to 0.5 empirically, which means at least more than half of the summary tokens should be in the document. Next, we filter the examples with multiple length and sentence number constraints. To set reasonable thresholds, we use the statistics of the examples whose documents contain no more than 1,000 tokens. The 5th and the 95th percentiles are used as low and high thresholds of each constraint. Finally, in order to ensure generating the summary with the given document is feasible, we filter the examples through extractive oracle score. The extractive oracle is obtained through a greedy search over document sentence combinations with maximum 5 sentences. Here we adopt Rouge-2 recall as scoring metric and only the examples with an oracle score higher than 0.2 are kept. After running through the above rules, we have the WikiRef dataset with 280,724 examples. We randomly split the data into training, development and test sets and ensure no document overlapping across splits."
],
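A simplified sketch of the curation filters. The 0.5 unigram-recall and 0.2 oracle thresholds come from the text; the stop-word list, the length bounds and the helper names are placeholders rather than the authors' pipeline.

```python
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "was"}

def unigram_recall(summary_tokens, doc_tokens):
    content = [t for t in summary_tokens if t not in STOPWORDS]
    if not content:
        return 0.0
    doc_vocab = set(doc_tokens)
    return sum(t in doc_vocab for t in content) / len(content)

def keep_example(summary_tokens, doc_tokens, oracle_rouge2_recall,
                 doc_len_bounds=(50, 1000), summary_len_bounds=(5, 100)):
    if unigram_recall(summary_tokens, doc_tokens) < 0.5:
        return False
    lo, hi = doc_len_bounds                     # stand-ins for the 5th/95th percentile bounds
    if not (lo <= len(doc_tokens) <= hi):
        return False
    lo, hi = summary_len_bounds
    if not (lo <= len(summary_tokens) <= hi):
        return False
    return oracle_rouge2_recall >= 0.2          # feasibility filter via the extractive oracle
```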
[
"Table TABREF11 show statistics of the WikiRef dataset. The development set and the test set contains 12,000 examples each. The statistics across splits are evenly distributed and no bias observed. The numerous Wikipedia articles cover a wide range of topics. The average depth of the query is 2.5 with article titles are considered. Since the query are keywords in WikiRef, it is relatively shorter than the natural language queries with an average length of 6.7 tokens. Most summaries are composed of one or two sentences. And the document contains 18.8 sentences on average."
],
[
"We also conduct human evaluation on 60 WikiRef samples to examine the quality of the automatically constructed data. We partition the examples into four bins according to the oracle score and then sample 15 examples from each bin. Each example is scored in two criteria: (1) “Query Relatedness” examines to what extent the summary is a good response to the query and (2) “Doc Salience” examines to what extent the summary conveys salient document content given the query.",
"Table TABREF15 shows the evaluation result. We can see that most of the time the summaries are good responses to the queries across bins. Since we take section titles as query and the statement under the section as summary, the high evaluation score can be attributed to Wikipedia pages of high quality. When the oracle scores are getting higher, the summaries continue to better convey the salient document content specified by the query. On the other hand, we notice that sometimes the summaries only contain a proportion of salient document content. It is reasonable since reference articles may present several aspects related to topic. But we can see that it is mitigated when the oracle scores are high on the WikiRef dataset."
],
[
"In this section, we present experimental results of the proposed model on the DUC 2005, 2006, 2007 datasets with and without data augmentation. We also carry out benchmark tests on WikiRef as a standard query-focused summarization dataset."
],
[
"We use the uncased version of BERT-base for fine-tuning. The max sequence length is set to 512. We use Adam optimizer BIBREF15 with learning rate of 3e-5, $\\beta _1$ = 0.9, $\\beta _2$ = 0.999, L2 weight decay of 0.01, and linear decay of the learning rate. We split long documents into multiple windows with a stride of 100. Therefore, a sentence can appear in more than one windows. To avoid making predictions on an incomplete sentence or with suboptimal context, we score a sentence only when it is completely included and its context is maximally covered. The training epoch and batch size are selected from {3, 4}, and {24, 32}, respectively."
],
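A sketch of the sliding-window split mentioned above, reading "stride of 100" as the step by which the window advances; the window length and the token-id representation are assumptions.

```python
def split_into_windows(token_ids, max_len=512, stride=100):
    """Return (start_offset, window) pairs; consecutive windows overlap, so a sentence can recur."""
    windows = []
    start = 0
    while True:
        windows.append((start, token_ids[start:start + max_len]))
        if start + max_len >= len(token_ids):
            break
        start += stride
    return windows
```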
[
"For summary evaluation, we use Rouge BIBREF16 as our automatic evaluation metric. Rouge is the official metrics of the DUC benchmarks and widely used for summarization evaluation. Rouge-N measures the summary quality by counting overlapping N-grams with respect to the reference summary. Whereas Rouge-L measures the longest common subsequence. To compare with previous work on DUC datasets, we report the Rouge-1 and Rouge-2 recall computed with official parameters that limits the length to 250 words. On the WikiRef dataset, we report Rouge-1, Rouge-2 and Rouge-L scores."
],
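A toy Rouge-N recall computation to make the metric concrete; the reported results use the official ROUGE toolkit, not this sketch.

```python
from collections import Counter

def rouge_n_recall(candidate_tokens, reference_tokens, n=2):
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate_tokens), ngrams(reference_tokens)
    if not ref:
        return 0.0
    overlap = sum(min(count, cand[g]) for g, count in ref.items())
    return overlap / sum(ref.values())
```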
[
"We first train our extractive summarization model on the WikiRef dataset through sentence classification. And we need the ground-truth binary labels of sentences to be extracted. However, we can not find the sentences that exactly match the reference summary for most examples. To solve this problem, we use a greedy algorithm similar to BIBREF17 zhou-etal-2018-neural-document to find an oracle summary with document sentences that maximizes the Rouge-2 F1 score with respect to the reference summary. Given a document of $n$ sentences, we greedily enumerate the combination of sentences. For documents that contain numerous sentences, searching for an global optimal combination of sentences is computationally expensive. Meanwhile it is unnecessary since the reference summaries contain no more than four sentences. So we stop searching when no combination with $i$ sentences scores higher than the best the combination with $i$-1 sentences.",
"We also train an extractive summarization model through sentence regression. For each sentence, the oracle score for training is the Rouge-2 F1 score.",
"During inference, we rank sentences according to their predicted scores. Then we append the sentence one by one to form the summary if it is not redundant and scores higher than a threshold. We skip the redundant sentences that contain overlapping trigrams with respect to the current output summary as in BIBREF18 ft-bert-extractive. The threshold is searched on the development set to obtain the highest Rouge-2 F1 score."
],
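A hedged sketch of the greedy oracle search described above; `rouge2_f1` stands for any Rouge-2 F1 scorer, and the exact enumeration strategy of the cited work may differ.

```python
def greedy_oracle(doc_sentences, reference, rouge2_f1, max_sentences=5):
    """Greedily add sentences while Rouge-2 F1 against the reference keeps improving."""
    selected, best_score = [], 0.0
    while len(selected) < max_sentences:
        best_idx, best_gain = None, 0.0
        for idx, _ in enumerate(doc_sentences):
            if idx in selected:
                continue
            candidate = " ".join(doc_sentences[i] for i in sorted(selected + [idx]))
            gain = rouge2_f1(candidate, reference) - best_score
            if gain > best_gain:
                best_idx, best_gain = idx, gain
        if best_idx is None:   # no i-sentence combination beats the best (i-1)-sentence one
            break
        selected.append(best_idx)
        best_score += best_gain
    return sorted(selected)
```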
[
"We apply the proposed model and the following baselines:"
],
[
"outputs all sentences of the document as summary."
],
[
"is a straightforward summarization baseline that selects the leading sentences. We take the first two sentences for that the groundtruth summary contains 1.4 sentences on average."
],
[
"uses the same structure as the BERT with randomly initialized parameters."
],
[
"The results are shown in Table TABREF16. Our proposed model with classification output layer achieves 18.81 Rouge-2 score on the WikiRef test set. On average, the output summary consists of 1.8 sentences. Lead is a strong unsupervised baseline that achieves comparable results with the supervised neural baseline Transformer. Even though WikiRef is a large-scale dataset, training models with parameters initialized from BERT still significantly outperforms Transformer. The model trained using sentence regression performs worse than the one supervised by sentence classification. It is in accordance with oracle labels and scores. We observe a performance drop when generating summaries without queries (see “-Query”). It proves that the summaries in WikiRef are indeed query-focused."
],
[
"DUC 2005-2007 are query-focused multi-document summarization benchmarks. The documents are from the news domain and grouped into clusters according to their topics. And the summary is required to be no longer than 250 tokens. Table TABREF29 shows statistics of the DUC datasets. Each document cluster has several reference summaries generated by humans and a query that specifies the focused aspects and desired information. We show an example query from the DUC 2006 dataset below:",
"EgyptAir Flight 990?",
"What caused the crash of EgyptAir Flight 990?",
"Include evidence, theories and speculation.",
"The first narrative is usually a title and followed by several natural language questions or narratives."
],
[
"We follow standard practice to alternately train our model on two years of data and test on the third. The oracle scores used in model training are Rouge-2 recall of sentences. In this paper, we score a sentence by only considering the query and the its document. Then we rank sentences according to the estimated scores across documents within a cluster. For each cluster, we fetch the top-ranked sentences iteratively into the output summary with redundancy constraint met. A sentence is redundant if more than half of its bigrams appear in the current output summary.",
"The WikiRef dataset is used as augmentation data for DUC datasets in two steps. We first fine-tune BERT on the WikiRef dataset. Subsequently, we use the DUC datasets to further fine-tune parameters of the best pre-trained model."
],
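A sketch of the cluster-level selection loop described above: rank sentences by predicted score and add them while the bigram-redundancy constraint holds; the handling of the 250-token budget is simplified and the function names are illustrative.

```python
def bigrams(tokens):
    return {tuple(tokens[i:i + 2]) for i in range(len(tokens) - 1)}

def build_cluster_summary(scored_sentences, max_tokens=250):
    """scored_sentences: (score, tokens) pairs pooled across all documents in a cluster."""
    chosen, used_bigrams, used_tokens = [], set(), 0
    for score, tokens in sorted(scored_sentences, key=lambda pair: -pair[0]):
        grams = bigrams(tokens)
        # redundant if more than half of its bigrams already appear in the summary
        if grams and len(grams & used_bigrams) > 0.5 * len(grams):
            continue
        if used_tokens + len(tokens) > max_tokens:
            break
        chosen.append(tokens)
        used_bigrams |= grams
        used_tokens += len(tokens)
    return chosen
```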
[
"We compare our method with several previous query-focused summarization models, of which the AttSum is the state-of-the-art model:"
],
[
"is a simple baseline that selects leading sentences to form a summary."
],
[
"is an unsupervised method that ranks sentences according to its TF-IDF cosine similarity to the query."
],
[
"BIBREF7 is a supervised baseline that extracts both query-dependent and query-independent features and then using Support Vector Regression to learn the weights of features."
],
[
"BIBREF9 is a neural attention summarization system that tackles query relevance ranking and sentence salience ranking jointly."
],
[
"BIBREF10 is the contextual relation-based neural summarization system that improves sentence scoring by utilizing contextual relations among sentences."
],
[
"Table TABREF22 shows the Rouge scores of comparison methods and our proposed method. Fine-tuning BERT on DUC datasets alone outperforms previous best performing summarization systems on DUC 2005 and 2006 and obtains comparable results on DUC 2007. Our data augmentation method further advances the model to a new state of the art on all DUC benchmarks. We also notice that models pre-trained on the augmentation data achieve reasonable performance without further fine-tuning model parameters. It implies the WikiRef dataset reveals useful knowledge shared by the DUC datatset. We pre-train models on augmentation data under both sentence classification and sentence regression supervision. The experimental results show that both supervision types yield similar performance."
],
[
"To better understand the improvement brought by augmentation data, we conduct a human evaluation of the output summaries before and after data augmentation. We sample 30 output summaries of the DUC 2006 dataset for analysis. And we find that the model augmented by the WikiRef dataset produces more query-related summaries on 23 examples. Meanwhile,the extracted sentences are usually less redundant. We attribute these benefits to the improved coverage and query-focused extraction brought by the large-scale augmentation data."
],
[
"To further verify the effectiveness of our data augmentation method, we first pre-train models on the WikiRef dataset and then we vary the number of golden examples for fine-tuning. Here we take the DUC 2007 dataset as test set and use DUC 2005 and 2006 as training set. In Figure FIGREF33, we present Rouge-2 scores of fine-tuning BERT on DUC datasets for comparison. Either using DUC 2005 alone or DUC 2006 alone yields inferior performance than using both. Our proposed data augmentation method can obtain competitive results using only no more than 30 golden examples and outperform BERT fine-tuning thereafter."
],
[
"The improvement introduced by using the WikiRef dataset as augmentation data is traceable. At first, the document in the DUC datasets are news articles and we crawl newspaper webpages as one source of the WikiRef documents. Secondly, queries in the WikiRef dataset are hierarchical that specify the aspects it focuses on gradually. This is similar to the DUC datasets that queries are composed of several narratives to specify the desired information. The key difference is that queries in the WikiRef dataset are composed of key words, while the ones in the DUC datasets are mostly natural language. At last, we construct the WikiRef dataset to be a large-scale query-focused summarization dataset that contains more than 280,000 examples. In comparison, the DUC datasets contain only 145 clusters with around 10,000 documents. Therefore, query relevance and sentence context can be better modeled using data-driven neural methods with WikiRef. And it provides a better starting point for fine-tuning on the DUC datasets."
],
[
"In this paper, we propose to automatically construct a large-scale query-focused summarization dataset WikiRef using Wikipedia articles and the corresponding references. The statements, supporting citations and article title along with section titles of the statements are used as summaries, documents and queries respectively. The WikiRef dataset serves as a means of data augmentation on DUC benchmarks. It also is shown to be a eligible query-focused summarization benchmark. Moreover, we develop a BERT-based extractive query-focused summarization model to extract summaries from the documents. The model makes use of the query-sentence relationships and sentence-sentence relationships jointly to score sentences. The results on DUC benchmarks show that our model with data augmentation outperforms the state-of-the-art. As for future work, we would like to model relationships among documents for multi-document summarization."
]
],
"section_name": [
"Introduction",
"Related Work",
"Problem Formulation",
"Query-Focused Summarization Model",
"Query-Focused Summarization Model ::: Input Representation",
"Query-Focused Summarization Model ::: BERT Encoding Layer",
"Query-Focused Summarization Model ::: Output Layer",
"Query-Focused Summarization Model ::: Training Objective",
"WikiRef: Transforming Wikipedia into Augmented Data",
"WikiRef: Transforming Wikipedia into Augmented Data ::: Data Creation",
"WikiRef: Transforming Wikipedia into Augmented Data ::: Data Creation ::: Raw Data Collection",
"WikiRef: Transforming Wikipedia into Augmented Data ::: Data Creation ::: Data Curation",
"WikiRef: Transforming Wikipedia into Augmented Data ::: Data Statistics",
"WikiRef: Transforming Wikipedia into Augmented Data ::: Human Evaluation",
"Experiments",
"Experiments ::: Implementation Details",
"Experiments ::: Evaluation Metrics",
"Experiments ::: Experiments on WikiRef ::: Settings",
"Experiments ::: Experiments on WikiRef ::: Baselines",
"Experiments ::: Experiments on WikiRef ::: Baselines ::: All",
"Experiments ::: Experiments on WikiRef ::: Baselines ::: Lead",
"Experiments ::: Experiments on WikiRef ::: Baselines ::: Transformer",
"Experiments ::: Experiments on WikiRef ::: Results",
"Experiments ::: Experiments on DUC Datasets",
"Experiments ::: Experiments on DUC Datasets ::: Settings",
"Experiments ::: Experiments on DUC Datasets ::: Baselines",
"Experiments ::: Experiments on DUC Datasets ::: Baselines ::: Lead",
"Experiments ::: Experiments on DUC Datasets ::: Baselines ::: Query-Sim",
"Experiments ::: Experiments on DUC Datasets ::: Baselines ::: Svr",
"Experiments ::: Experiments on DUC Datasets ::: Baselines ::: AttSum",
"Experiments ::: Experiments on DUC Datasets ::: Baselines ::: CrSum",
"Experiments ::: Experiments on DUC Datasets ::: Results",
"Experiments ::: Experiments on DUC Datasets ::: Human Evaluation",
"Experiments ::: Experiments on DUC Datasets ::: Ablation Study",
"Experiments ::: Discussion",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"024d8518c06183211eafc2cae8977166e3b27f18",
"11da8856ce1a861d472379115d19b38aedc68625"
],
"answer": [
{
"evidence": [
"In this paper, we develop a BERT-based model for query-focused extractive summarization. The model takes the concatenation of the query and the document as input. The query-sentence and sentence-sentence relationships are jointly modeled by the self-attention mechanism BIBREF12. The model is fine-tuned to utilize the general language representations of BERT BIBREF13."
],
"extractive_spans": [
"The model takes the concatenation of the query and the document as input. The query-sentence and sentence-sentence relationships are jointly modeled by the self-attention mechanism BIBREF12. The model is fine-tuned to utilize the general language representations of BERT BIBREF13."
],
"free_form_answer": "",
"highlighted_evidence": [
"The model takes the concatenation of the query and the document as input. The query-sentence and sentence-sentence relationships are jointly modeled by the self-attention mechanism BIBREF12. The model is fine-tuned to utilize the general language representations of BERT BIBREF13."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we develop a BERT-based model for query-focused extractive summarization. The model takes the concatenation of the query and the document as input. The query-sentence and sentence-sentence relationships are jointly modeled by the self-attention mechanism BIBREF12. The model is fine-tuned to utilize the general language representations of BERT BIBREF13.",
"Figure FIGREF2 gives an overview of our BERT-based extractive query-focused summmarization model. For each sentence, we use BERT to encode its query relevance, document context and salient meanings into a vector representation. Then the vector representations are fed into a simple output layer to predict the label or estimate the score of each sentence."
],
"extractive_spans": [],
"free_form_answer": "It takes the query and document as input and encodes the query relevance, document context and salient meaning to be passed to the output layer to make the prediction.",
"highlighted_evidence": [
"In this paper, we develop a BERT-based model for query-focused extractive summarization. The model takes the concatenation of the query and the document as input. The query-sentence and sentence-sentence relationships are jointly modeled by the self-attention mechanism BIBREF12. The model is fine-tuned to utilize the general language representations of BERT BIBREF13.",
"Figure FIGREF2 gives an overview of our BERT-based extractive query-focused summmarization model. For each sentence, we use BERT to encode its query relevance, document context and salient meanings into a vector representation. Then the vector representations are fed into a simple output layer to predict the label or estimate the score of each sentence."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"394db826067bc6f28ca1b807710c81e838d6e571",
"c98c2b8259dd9bcad08ab74130a4a58cffc64ee0"
],
"answer": [
{
"evidence": [
"In order to advance query-focused summarization with limited data, we improve the summarization model with data augmentation. Specifically, we transform Wikipedia into a large-scale query-focused summarization dataset (named as WikiRef). To automatically construct query-focused summarization examples using Wikipedia, the statements' citations in Wikipedia articles as pivots to align the queries and documents. Figure FIGREF1 shows an example that is constructed by the proposed method. We first take the highlighted statement as the summary. Its supporting citation is expected to provide an adequate context to derive the statement, thus can serve as the source document. On the other hand, the section titles give a hint about which aspect of the document is the summary's focus. Therefore, we use the article title and the section titles of the statement to form the query. Given that Wikipedia is the largest online encyclopedia, we can automatically construct massive query-focused summarization examples."
],
"extractive_spans": [
"To automatically construct query-focused summarization examples using Wikipedia, the statements' citations in Wikipedia articles as pivots to align the queries and documents. Figure FIGREF1 shows an example that is constructed by the proposed method. We first take the highlighted statement as the summary. Its supporting citation is expected to provide an adequate context to derive the statement, thus can serve as the source document. On the other hand, the section titles give a hint about which aspect of the document is the summary's focus. Therefore, we use the article title and the section titles of the statement to form the query. "
],
"free_form_answer": "",
"highlighted_evidence": [
"To automatically construct query-focused summarization examples using Wikipedia, the statements' citations in Wikipedia articles as pivots to align the queries and documents. Figure FIGREF1 shows an example that is constructed by the proposed method. We first take the highlighted statement as the summary. Its supporting citation is expected to provide an adequate context to derive the statement, thus can serve as the source document. On the other hand, the section titles give a hint about which aspect of the document is the summary's focus. Therefore, we use the article title and the section titles of the statement to form the query. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to advance query-focused summarization with limited data, we improve the summarization model with data augmentation. Specifically, we transform Wikipedia into a large-scale query-focused summarization dataset (named as WikiRef). To automatically construct query-focused summarization examples using Wikipedia, the statements' citations in Wikipedia articles as pivots to align the queries and documents. Figure FIGREF1 shows an example that is constructed by the proposed method. We first take the highlighted statement as the summary. Its supporting citation is expected to provide an adequate context to derive the statement, thus can serve as the source document. On the other hand, the section titles give a hint about which aspect of the document is the summary's focus. Therefore, we use the article title and the section titles of the statement to form the query. Given that Wikipedia is the largest online encyclopedia, we can automatically construct massive query-focused summarization examples."
],
"extractive_spans": [],
"free_form_answer": "They use the article and section titles to build a query and use the body text of citation as the summary.",
"highlighted_evidence": [
"To automatically construct query-focused summarization examples using Wikipedia, the statements' citations in Wikipedia articles as pivots to align the queries and documents. Figure FIGREF1 shows an example that is constructed by the proposed method. We first take the highlighted statement as the summary. Its supporting citation is expected to provide an adequate context to derive the statement, thus can serve as the source document. On the other hand, the section titles give a hint about which aspect of the document is the summary's focus. Therefore, we use the article title and the section titles of the statement to form the query. Given that Wikipedia is the largest online encyclopedia, we can automatically construct massive query-focused summarization examples."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"How does their BERT-based model work?",
"How do they use Wikipedia to automatically collect a query-focused summarization dataset?"
],
"question_id": [
"c3ce95658eea1e62193570955f105839de3d7e2d",
"389cc454ac97609e9d0f2b2fe70bf43218dd8ba7"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 1: An example of the automatic query-focused summarization example construction. Given a statement in Wikipedia article “Marina Beach”, we take the body text of citation as the document, use the article title along with section titles to form a query (i.e., “Marina Beach, Incidents”), and the statement is the summary.",
"Figure 2: The overview of the proposed BERT-based extractive summarization model. We use special tokens (e.g., [L1], [L2]) to indicate hierarchial structure in queries. We surround each sentence with a [CLS] token before and a [SEP] token after. The input representations of each token are composed of three embeddings. The hidden vectors of [CLS] tokens from the last layer are used to represent and score sentences.",
"Figure 3: Illustration of WIKIREF examples creation using Wikipedia and reference pages.",
"Table 1: Statistics of the WIKIREF dataset.",
"Table 2: Quality rating results of human evaluation on the WIKIREF dataset. “Query Relatedness”: 2 for summary completely related to the query, 1 for summary partially related to the query, 0 otherwise. “Doc Salience”: 2 for summary conveys all salient document content, 1 for summary conveys partial salient document content, 0 otherwise.",
"Table 3: ROUGE scores of baselines and the proposed model on WIKIREF dataset. “Class” and “Reg” represent classification and regression, which indicate the supervision type used for training. “- Query” indicates removing queries from the input.",
"Table 4: ROUGE scores on the DUC 2005, 2006 and 2007 datasets. “*” indicates results taken from Ren et al. (2017). “†” indicates results taken from Cao et al. (2016). “DA” is short for data augmentation using WIKIREF dataset. “DA Pre-trained” denotes applying the model pre-trained on augmentation data to directly extract summaries for DUC datasets. “Class” and “Reg” represent classification super and regression, which indicate the supervision type used on augmentation data.",
"Table 5: Statistics of DUC datasets.",
"Figure 4: ROUGE-2 score on the DUC 2007 evaluation set with various training data. The horizontal lines indicate training on DUC 2005, on DUC 2006, and on both. The green line indicates training on only a proportion of DUC 2005 and DUC 2006 examples with the WIKIREF data augmentation. The x-axis indicates the number of used training examples, along with data augmentation."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Table5-1.png",
"8-Figure4-1.png"
]
} | [
"How does their BERT-based model work?",
"How do they use Wikipedia to automatically collect a query-focused summarization dataset?"
] | [
[
"1911.03324-Introduction-3",
"1911.03324-Query-Focused Summarization Model-0"
],
[
"1911.03324-Introduction-1"
]
] | [
"It takes the query and document as input and encodes the query relevance, document context and salient meaning to be passed to the output layer to make the prediction.",
"They use the article and section titles to build a query and use the body text of citation as the summary."
] | 219 |
1910.03177 | Read, Highlight and Summarize: A Hierarchical Neural Semantic Encoder-based Approach | Traditional sequence-to-sequence (seq2seq) models and variations of the attention mechanism such as hierarchical attention have been applied to the text summarization problem. Though there is a hierarchy in the way humans use language, forming paragraphs from sentences and sentences from words, hierarchical models have usually not worked much better than their traditional seq2seq counterparts. This is mainly because hierarchical attention mechanisms are either too sparse when using hard attention or too noisy when using soft attention. In this paper, we propose a method based on extracting the highlights of a document: a key concept that is conveyed in a few sentences. In a typical text summarization dataset consisting of documents that are 800 tokens long on average, capturing long-term dependencies is very important; e.g., the last sentence of a document may need to be grouped with the first sentence to form a summary. LSTMs (Long Short-Term Memory) proved useful for machine translation, but they often fail to capture long-term dependencies while modeling long sequences. To address these issues, we adapt Neural Semantic Encoders (NSE), a class of memory-augmented neural networks, to text summarization by improving their functionality, and we propose a novel hierarchical NSE that significantly outperforms similar previous models. The quality of summarization is further improved by augmenting each word in the dataset with linguistic factors, namely the lemma and Part-of-Speech (PoS) tag, for better vocabulary coverage and generalization. The hierarchical NSE model on the factored dataset outperformed the state of the art by nearly 4 ROUGE points. We further designed and used the first GPU-based implementation of self-critical Reinforcement Learning training. | {
"paragraphs": [
[
"When there are a very large number of documents that need to be read in limited time, we often resort to reading summaries instead of the whole document. Automatically generating (abstractive) summaries is a problem with various applications, e.g., automatic authoring BIBREF0. We have developed automatic text summarization systems that condense large documents into short and readable summaries. It can be used for both single (e.g., BIBREF1, BIBREF2 and BIBREF3) and multi-document summarization (e.g.,BIBREF4, BIBREF3, BIBREF5).",
"Text summarization is broadly classified into two categories: extractive (e.g., BIBREF3 and BIBREF6) and abstractive summarization (e.g., BIBREF7, BIBREF8 and BIBREF9). Extractive approaches select sentences from a given document and groups them to form concise summaries. By contrast, abstractive approaches generate human-readable summaries that primarily capture the semantics of input documents and contain rephrased key content. The former task falls under the classification paradigm, and the latter belongs to the generative modeling paradigm, and therefore, it is a much harder problem to solve. The backbone of state-of-the-art summarization models is a typical encoder-decoder BIBREF10 architecture that has proved to be effective for various sequential modeling tasks such as machine translation, sentiment analysis, and natural language generation. It contains an encoder that maps the raw input word vector representations to a latent vector. Then, the decoder usually equipped with a variant of the attention mechanism BIBREF11 uses the latent vectors to generate the output sequence, which is the summary in our case. These models are trained in a supervised learning setting where we minimize the cross-entropy loss between the predicted and the target summary. Encoder-decoder models have proved effective for short sequence tasks such as machine translation where the length of a sequence is less than 120 tokens. However, in text summarization, the length of the sequences vary from 400 to 800 tokens, and modeling long-term dependencies becomes increasingly difficult.",
"Despite the metric's known drawbacks, text summarization models are evaluated using ROUGE BIBREF12, a discrete similarity score between predicted and target summaries based on 1-gram, 2-gram, and n-gram overlap. Cross-entropy loss would be a convenient objective on which to train the model since ROUGE is not differentiable, but doing so would create a mismatch between metrics used for training and evaluation. Though a particular summary scores well on ROUGE evaluation comparable to the target summary, it will be assigned lower probability by a supervised model. To tackle this problem, we have used a self-critic policy gradient method BIBREF13 to train the models directly using the ROUGE score as a reward. In this paper, we propose an architecture that addresses the issues discussed above."
],
[
"Let $D=\\lbrace d_{1}, d_{2}, ..., d_{N}\\rbrace $ be the set of document sentences where each sentence $d_{i}, 1 \\le i \\le N$ is a set of words and $S=\\lbrace s_{1}, s_{2}, ..., s_{M}\\rbrace $ be the set of summary sentences. In general, most of the sentences in $D$ are a continuation of another sentence or related to each other, for example: in terms of factual details or pronouns used. So, dividing the document into multiple paragraphs as done by BIBREF4 leaves out the possibility of a sentence-level dependency between the start and end of a document. Similarly, abstracting a single document sentence as done by BIBREF9 cannot include related information from multiple document sentences. In a good human-written summary, each summary sentence is a compressed version of a few document sentences. Mathematically,",
"Where $C$ is a compressor we intend to learn. Figure FIGREF3 represents the fundamental idea when using a sequence-to-sequence architecture. For a sentence $s$ in summary, the representations of all the related document sentences $d_{1}, d_{2}, ..., d_{K}$ are expected to form a cluster that represents a part of the highlight of the document.",
"First, we adapt the Neural Semantic Encoder (NSE) for text summarization by improving its attention mechanism and compose function. In a standard sequence-to-sequence model, the decoder has access to input sequence through hidden states of an LSTM BIBREF14, which suffers from the difficulties that we discussed above. The NSE is equipped with an additional memory, which maintains a rich representation of words by evolving over time. We then propose a novel hierarchical NSE by using separate word memories for each sentence to enrich the word representations and a document memory to enrich the sentence representations, which performed better than its previous counterparts (BIBREF7, BIBREF3, BIBREF15). Finally, we use a maximum-entropy self-critic model to achieve better performance using ROUGE evaluation."
],
[
"The first encoder-decoder for text summarziation is used by BIBREF1 coupled with an attention mechanism. Though encoder-decoder models gave a state-of-the-art performance for Neural Machine Translation (NMT), the maximum sequence length used in NMT is just 100 tokens. Typical document lengths in text summarization vary from 400 to 800 tokens, and LSTM is not effective due to the loss in memory over time for very long sequences. BIBREF7 used hierarchical attentionBIBREF16 to mitigate this effect where, a word LSTM is used to encode (decode) words, and a sentence LSTM is used to encode (decode) sentences. The use of two LSTMs separately for words and sentences improves the ability of the model to retain its memory for longer sequences. Additionally, BIBREF7 explored using a hierarchical model consisting of a feature-rich encoder incorporating position, Named Entity Recognition (NER) tag, Term Frequency (TF) and Inverse Document Frequency (IDF) scores. Since an RNN is a sequential model, computing at one time-step needs all of the previous time-steps to have computed before and is slow because the computation at all the time steps cannot be performed in parallel. BIBREF8 used convolutional layers coupled with an attention mechanism BIBREF11 to increase the speed of the encoder. Since the input to an RNN is fed sequentially, it is expected to capture the positional information. But both works BIBREF7 and BIBREF8 found positional embeddings to be quite useful for reasons unknown. BIBREF3 proposed an extractive summarization model that classifies sentences based on content, saliency, novelty, and position. To deal with out-of-vocabulary (OOV) words and to facilitate copying salient information from input sequence to the output, BIBREF2 proposed a pointer-generator network that combines pointing BIBREF17 with generation from vocabulary using a soft-switch. Attention models for longer sequences tend to be repetitive due to the decoder repeatedly attending to the same position from the encoder. To mitigate this issue, BIBREF2 used a coverage mechanism to penalize a decoder from attending to same locations of an encoder. However, the pointer generator and the coverage model BIBREF2 are still highly extractive; copying the whole article sentences 35% of the time. BIBREF18 introduced an intra-attention model in which attention also depends on the predictions from previous time steps.",
"One of the main issues with sequence-to-sequence models is that optimization using the cross-entropy objective does not always provide excellent results because the models suffer from a mismatch between the training objective and the evaluation metrics such as ROUGE BIBREF12 and METEOR BIBREF19. A popular algorithm to train a decoder is the teacher-forcing algorithm that minimizes the negative log-likelihood (cross-entropy loss) at each decoding time step given the previous ground-truth outputs. But during the testing stage, the prediction from the previous time-step is fed as input to the decoder instead of the ground truth. This exposure bias results in error accumulation over each time step because the model has never been exposed to its predictions during training. Instead, recent works show that summarization models can be trained using reinforcement learning (RL) where the ROUGE BIBREF12 score is used as the reward (BIBREF18, BIBREF9 and BIBREF4).",
"BIBREF5 made such an earlier attempt by using Q-learning for single-and multi-document summarization. Later, BIBREF15 proposed a coarse-to-fine hierarchical attention model to select a salient sentence using sentence attention using REINFORCE BIBREF20 and feed it to the decoder. BIBREF6 used REINFORCE to rank sentences for extractive summarization. BIBREF4 proposed deep communicating agents that operate over small chunks of a document, which is learned using a self-critical BIBREF13 training approach consisting of intermediate rewards. BIBREF9 used a advantage actor-critic (A2C) method to extract sentences followed by a decoder to form abstractive summaries. Our model does not suffer from their limiting assumption that a summary sentence is an abstracted version of a single source sentence. BIBREF18 trained their intra-attention model using a self-critical policy gradient algorithm BIBREF13. Though an RL objective gives a high ROUGE score, the output summaries are not readable by humans. To mitigate this problem, BIBREF18 used a weighted sum of supervised learning loss and RL loss.",
"Humans first form an abstractive representation of what they want to say and then try to put it into words while communicating. Though it seems intuitive that there is a hierarchy from sentence representation to words, as observed by both BIBREF7 and BIBREF15, these hierarchical attention models failed to outperform a simple attention model BIBREF1. Unlike feedforward networks, RNNs are expected to capture the input sequence order. But strangely, positional embeddings are found to be effective (BIBREF7, BIBREF8, BIBREF15 and BIBREF3). We explored a few approaches to solve these issues and improve the performance of neural models for abstractive summarization."
],
[
"In this section, we first describe the baseline Neural Semantic Encoder (NSE) class, discuss improvements to the compose function and attention mechanism, and then propose the Hierarchical NSE. Finally, we discuss the self-critic model that is used to boost the performance further using ROUGE evaluation."
],
[
"A Neural Semantic Encoder BIBREF21 is a memory augmented neural network augmented with an encoding memory that supports read, compose, and write operations. Unlike the traditional sequence-to-sequence models, using an additional memory relieves the LSTM of the burden to remember the whole input sequence. Even compared to the attention-model BIBREF11 which uses an additional context vector, the NSE has anytime access to the full input sequence through a much larger memory. The encoding memory is evolved using basic operations described as follows:",
"Where, $x_{t} \\in \\mathbb {R}^D$ is the raw embedding vector at the current time-step. $f_{r}^{LSTM}$ , $f_{c}^{MLP}$ (Multi-Layer Perceptron), $f_{w}^{LSTM}$ be the read, compose and write operations respectively. $e_{l} \\in R^{l}$ , $e_{k} \\in R^{k}$ are vectors of ones, $\\mathbf {1}$ is a matrix of ones and $\\otimes $ is the outer product.",
"Instead of using the raw input, the read function $f_{r}^{LSTM}$ in equation DISPLAY_FORM5 uses an LSTM to project the word embeddings to the internal space of memory $M_{t-1}$ to obtain the hidden states $o_{t}$. Now, the alignment scores $z_{t}$ of the past memory $M_{t-1}$ are calculated using $o_{t}$ as the key with a simple dot-product attention mechanism shown in equation DISPLAY_FORM6. A weighted sum gives the retrieved input memory that is used in equation DISPLAY_FORM8 by a Multi-Layer Perceptron in composing new information. Equation DISPLAY_FORM9 uses an LSTM and projects the composed states into the internal space of memory $M_{t-1}$ to obtain the write states $h_{t}$. Finally, in equation DISPLAY_FORM10, the memory is updated by erasing the retrieved memory as per $z_{t}$ and writing as per the write vector $h_{t}$. This process is performed at each time-step throughout the input sequence. The encoded memories $\\lbrace M\\rbrace _{t=1}^{T}$ are similarly used by the decoder to obtain the write vectors $\\lbrace h\\rbrace _{t=1}^{T}$ that are eventually fed to projection and softmax layers to get the vocabulary distribution."
],
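Since the display equations did not survive extraction, here is a schematic single NSE time-step that paraphrases the read/attend/compose/write cycle described in the text. It is a PyTorch illustration, not the authors' implementation, and `f_read`, `f_compose`, `f_write` stand in for the LSTM/MLP modules.

```python
import torch
import torch.nn as nn

def nse_step(x_t, M_prev, f_read, f_compose, f_write):
    o_t = f_read(x_t)                               # project the input into the memory space
    z_t = torch.softmax(M_prev @ o_t, dim=0)        # alignment scores over memory slots
    m_t = z_t @ M_prev                              # retrieved memory (weighted sum)
    c_t = f_compose(torch.cat([o_t, m_t], dim=-1))  # compose new information
    h_t = f_write(c_t)                              # write state
    M_new = M_prev * (1 - z_t).unsqueeze(-1) + z_t.unsqueeze(-1) * h_t  # erase, then write
    return h_t, M_new

d = 300
f_read = nn.Linear(d, d)
f_compose = nn.Sequential(nn.Linear(2 * d, d), nn.Tanh())
f_write = nn.Linear(d, d)
h, M = nse_step(torch.randn(d), torch.randn(20, d), f_read, f_compose, f_write)
```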
[
"Although the vanilla NSE described above performed well for machine translation, just a dot-product attention mechanism is too simplistic for text summarization. In machine translation, it is sufficient to compute the correlation between word-vectors from the semantic spaces of different languages. In contrast, text summarization also needs a word-sentence and sentence-sentence correlation along with the word-word correlation. So, in search of an attention mechanism with a better capacity to model the complex semantic relationships inherent in text summarization, we found that the additive attention mechanism BIBREF11 given by the equation below performs well.",
"Where, $v, W, U, b_{attn}$ are learnable parameters. One other important difference is the compose function: a Multi-layer Perceptron (MLP) is enough for machine translation as the sequences are short in length. However, text summarization consists of longer sequences that have sentence-to-sentence dependencies, and a history of previously composed words is necessary for overcoming repetition BIBREF1 and thereby maintaining novelty. A powerful function already at our disposal is the LSTM; we replaced the MLP with an LSTM, as shown below:",
"In a standard text summarization task, due to the limited size of word vocabulary, out-of-vocabulary (OOV) words are replaced with [UNK] tokens. pointer-networks BIBREF17 facilitate the ability to copy words from the input sequence to the output via pointing. Later, BIBREF2 proposed a hybrid pointer-generator mechanism to improve upon pointing by retaining the ability to generate new words. It points to the words from the input sequence and generates new words from the vocabulary. A generation probability $p_{gen} \\in (0, 1)$ is calculated using the retrieved memories, attention distribution, current input hidden state $o_{t}$ and write state $h_{t}$ as follows:",
"Where, $W_{m}, W_{h}, W_{o}, b_{ptr}$ are learnable parameters, and $\\sigma $ is the sigmoid activation function. Next, $p_{gen}$ is used as a soft switch to choose between generating a word from the vocabulary by sampling from $p_{vocab}$, or copying a word from the input sequence by sampling from the attention distribution $z_{t}$. For each document, we maintain an auxiliary vocabulary of OOV words in the input sequence. We obtain the following final probability distribution over the total extended vocabulary:",
"Note that if $w$ is an OOV word, then $p_{vocab}(w)$ is zero; similarly, if $w$ does not appear in the source document, then $\\sum _{i:w = w_{i}} z_{i}^{t}$ is zero. The ability to produce OOV words is one of the primary advantages of the pointer-generator mechanism. We can also use a smaller vocabulary size and thereby speed up the computation of output projection and softmax layers."
],
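A sketch of the pointer-generator mixture over the extended vocabulary described above; the extended-vocabulary bookkeeping is simplified and the tensor names are illustrative, not the authors' code.

```python
import torch

def final_distribution(p_gen, p_vocab, attention, src_ids, extended_vocab_size):
    """p_vocab: (V,), attention: (T_src,), src_ids: (T_src,) ids in the extended vocabulary."""
    dist = torch.zeros(extended_vocab_size)
    dist[: p_vocab.size(0)] = p_gen * p_vocab                  # generate from the vocabulary
    dist.scatter_add_(0, src_ids, (1.0 - p_gen) * attention)   # copy via the attention weights
    return dist

vocab = torch.softmax(torch.randn(50_000), dim=0)
attn = torch.softmax(torch.randn(6), dim=0)
src = torch.tensor([12, 407, 50_000, 50_001, 3, 12])           # last two are in-document OOV ids
p_final = final_distribution(0.8, vocab, attn, src, extended_vocab_size=50_002)
```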
[
"When humans read a document, we organize it in terms of word semantics followed by sentence semantics and then document semantics. In a text summarization task, after reading a document, sentences that have similar meanings or continual information are grouped together and then expressed in words. Such a hierarchical model was first introduced by BIBREF16 for document classification and later explored unsuccessfully for text summarization BIBREF3. In this work, we propose to use a hierarchical model with improved NSE to take advantage of both augmented memory and also the hierarchical document representation. We use a separate memory for each sentence to represent all the words of a sentence and a document memory to represent all sentences. Word memory composes novel words, and document memory composes novel sentences in the encoding process that can be later used to extract highlights and decode to summaries as shown in Figure FIGREF17.",
"Let $D = \\lbrace (w_{ij})_{j=1}^{T_{in}}\\rbrace _{i=1}^{S_{in}}$ be the input document sequence, where $S_{in}$ is the number of sentences in a document and $T_{in}$ is the number of words per sentence. Let $\\lbrace M_{i}\\rbrace _{i=1}^{S_{in}}, M_{i} \\in R^{T_{in} \\times D}$ be the sentence memories that encode all the words in a sentence and $M^{d}, M^{d} \\in R^{S_{in} \\times D}$ be the document memory that encodes all the sentences present in the document. At each time-step, an input token $x_{t}$ is read and is used to retrieve aligned content from both corresponding sentence memory $M_{t}^{i, s}$ and document memory $M_{t}^{d}$. Please note that the retrieved document memory, which is a weighted combination of all the sentence representations forms a highlight. After composition, both the sentence and document memories are written simultaneously. This way, the words are encoded with contextual meaning, and also new simpler sentences are formed. The functionality of the model is as follows:",
"Where, $f_{attn}$ is the attention mechanism given by equation(DISPLAY_FORM12). $Update$ remains the same as the vanilla NSE given by equation(DISPLAY_FORM10)and $Concat$ is the vector concatenation. Please note that NSE BIBREF21 has a concept of shared memory but we use multiple memories for representing words and a document memory for representing sentences, this is fundamentally different to a shared memory which does not have a concept of hierarchy."
],
[
"As discussed earlier, training in a supervised learning setting creates a mismatch between training and testing objectives. Also, feeding the ground-truth labels in training time-step creates an exposure bias while testing in which we feed the predictions from the previous time-step. Policy gradient methods overcome this by directly optimizing the non-differentiable metrics such as ROUGE BIBREF12 and METEOR BIBREF19. It can be posed as a Markov Decision Process in which the set of actions $\\mathcal {A}$ is the vocabulary and reward $\\mathcal {R}$ is the ROUGE score itself. So, we should find a policy $\\pi (\\theta )$ such that the set of sampled words $\\tilde{y} = \\lbrace \\tilde{y}_{1}, \\tilde{y}_{2}, ..., \\tilde{y}_{T}\\rbrace $ achieves highest ROUGE score among all possible summaries.",
"We used the self-critical model of BIBREF13 proposed for image captioning. In self-critical sequence training, the REINFORCE algorithm BIBREF20 is used by modifying its baseline as the greedy output of the current model. At each time-step $t$, the model predicts two words: $\\hat{y}_{t}$ sampled from $p(\\hat{y}_{t} | \\hat{y}_{1}, \\hat{y}_{2}, ..., \\hat{y}_{t-1}, x)$, the baseline output that is greedily generated by considering the most probable word from the vocabulary and $\\tilde{y}_{t}$ sampled from the $p(\\tilde{y}_{t} | \\tilde{y}_{1}, \\tilde{y}_{2}, ..., \\tilde{y}_{t-1}, x)$. This model is trained using the following loss function:",
"Using the above training objective, the model learns to generate samples with high probability and thereby increasing $r(\\tilde{y})$ above $r(\\hat{y})$. Additionally, we have used enthttps://stackoverflow.com/questions/19053077/looping-over-data-and-creating-individual-figuresropy regularization.",
"Where, $p(\\tilde{y}_{t})=p(\\tilde{y}_{t} | \\tilde{y}_{1}, \\tilde{y}_{2}, ..., \\tilde{y}_{t-1}, x)$ is the sampling probability and $V$ is the size of the vocabulary. It is similar to the exploration-exploitation trade-off. $\\alpha $ is the regularization coefficient that explicitly controls this trade-off: a higher $\\alpha $ corresponds to more exploration, and a lower $\\alpha $ corresponds to more exploitation. We have found that all TensorFlow based open-source implementations of self-critic models use a function (tf.py_func) that runs only on CPU and it is very slow. To the best of our knowledge, ours is the first GPU based implementation."
],
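A hedged PyTorch-style sketch of the self-critical objective with an entropy bonus, as described above. The authors' implementation is TensorFlow-based, the exact form of their entropy term is not shown in the text, and the rewards here are stand-ins for ROUGE.

```python
import torch

def self_critical_loss(sample_log_probs, sampled_reward, greedy_reward,
                       token_dists, alpha=1e-4):
    """sample_log_probs: (T,) log p of the sampled tokens; token_dists: (T, V) output distributions."""
    advantage = sampled_reward - greedy_reward        # r(y~) - r(y^)
    rl_loss = -advantage * sample_log_probs.sum()     # push up samples that beat the greedy baseline
    entropy = -(token_dists * token_dists.clamp_min(1e-12).log()).sum(dim=-1).mean()
    return rl_loss - alpha * entropy                  # alpha trades exploration vs. exploitation
```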
[
"We used the CNN/Daily Mail dataset BIBREF7, which has been used as the standard benchmark to compare text summarization models. This corpus has 286,817 training pairs, 13,368 validation pairs, and 11,487 test pairs, as defined by their scripts. The source document in the training set has 766 words spanning 29.74 sentences on an average while the summaries consist of 53 words and 3.72 sentences BIBREF7. The unique characteristics of this dataset such as long documents, and ordered multi-sentence summaries present exciting challenges, mainly because the proven sequence-to-sequence LSTM based models find it hard to learn long-term dependencies in long documents. We have used the same train/validation/test split and examples for a fair comparison with the existing models.",
"The factoring of lemma and Part-of-Speech (PoS) tag of surface words, are observed BIBREF22 to increase the performance of NMT models in terms of BLEU score drastically. This is due to the improvement of the vocabulary coverage and better generalization. We have added a pre-processing step by incorporating the lemma and PoS tag to every word of the dataset and training the supervised model on the factored data. The process of extracting the lemma and the PoS tags has been described in BIBREF22. Please refer to the appendix for an example of factoring."
],
[
"For all the plain NSE models, we have truncated the article to a maximum of 400 tokens and the summary to 100 tokens. For the hierarchical NSE models, articles are truncated to have a maximum of 20 sentences and 20 words per sentence each. Shorter sequences are padded with `PAD` tokens. Since the factored models have lemma, PoS tag and the separator `|` for each word, sequence lengths should be close to 3 times the non-factored counterparts. For practical reasons of memory and time, we have used 800 tokens per article and 300 tokens for the summary.",
"For all the models, including the pointer-generator model, we use a vocabulary size of 50,000 words for both source and target. Though some previous works BIBREF7 have used large vocabulary sizes of 150,000, since our models have a copy mechanism, smaller vocabulary is enough to obtain good performance. Large vocabularies increase the computation time. Since memory plays a prominent role in retrieval and update, it is vital to start with a good initialization. We have used 300-dimensional pre-trained GloVe BIBREF23 word-vectors to represent the input sequence to a model. Sentence memories are initialized with GloVe word-vectors of all the words in that sentence. Document memories are initialized with vector representations of all the sentences where a sentence is represented with the average of the GloVe word-vectors of all its words. All the models are trained using the Adam optimizer with the default learning rate of 0.001. We have not applied any regularization as the usage of dropout, and $L_{2}$ penalty resulted in similar performance, however with a drastically increased training time.",
"The Hierarchical models process one sentence at a time, and hence attention distributions need less memory, and therefore, a larger batch size can be used, which in turn speeds up the training process. The non-factored model is trained on 7-NVIDIA Tesla-P100 GPUs with a batch size of 448 (64 examples per GPU); it takes approximately 45 minutes per epoch. Since the factored sequences are long, we used a batch size of 96 (12 examples per GPU) on 8-NVIDIA Tesla-V100 GPUs. The Hier model reaches optimal cross-entropy loss in just 8 epochs, unlike 33-35 epochs for both BIBREF7 and BIBREF2. For the self-critical model, training is started from the best supervised model with a learning rate of 0.00005 and manually changed to 0.00001 when needed with $\\alpha =0.0001$ and the reported results are obtained after training for 15 days."
],
[
"All the models are evaluated using the standard metric ROUGE; we report the F1 scores for ROUGE-1, ROUGE-2, and ROUGE-L, which quantitively represent word-overlap, bigram-overlap, and longest common subsequence between reference summary and the summary that is to be evaluated. The results are obtained using pyrouge package. The performance of various models and our improvements are summarized in Table TABREF37. A direct implementation of NSE performed very poorly due to the simple dot-product attention mechanism. In NMT, a transformation from word-vectors in one language to another one (say English to French) using a mere matrix multiplication is enough because of the one-to-one correspondence between words and the underlying linear structure imposed in learning the word vectors BIBREF23. However, in text summarization a word (sentence) could be a condensation of a group of words (sentences). Therefore, using a complex neural network-based attention mechanism proposed improved the performance. Both dot-product and additive BIBREF11 mechanisms perform similarly for the NMT task, but the difference is more pronounced for the text summarization task simply because of the nature of the problem as described earlier. Replacing Multi-Layered Perceptron (MLP) in the NSE with an LSTM further improved the performance because it remembers what was previously composed and facilitates the composition of novel words. This also eliminates the need for additional mechanisms to penalize repetitions such as coverage BIBREF2 and intra-attention BIBREF18. Finally, using memories for each sentence enriches the corresponding word representation, and the document memory enriches the sentence representation that help the decoder. Please refer to the appendix for a few example outputs. Table TABREF34 shows the results in comparison to the previous methods. Our hierarchical model outperforms BIBREF7 (HIER) by 5 ROUGE points. Our factored model achieves the new state-of-the-art (SoTA) result, outperforming BIBREF4 by almost 4 ROUGE points."
],
[
"In this work, we presented a memory augmented neural network for the text summarization task that addresses the shortcomings of LSTM-based models. We applied a critical pre-processing step by factoring the dataset with inherent linguistic information that outperforms the state-of-the-art by a large margin. In the future, we will explore new sparse functions BIBREF24 to enforce strict sparsity in selecting highlights out of sentences. The general framework of pre-processing, and extracting highlights can also be used with powerful pre-trained models like BERT BIBREF25 and XLNet BIBREF26."
],
[
"Figure FIGREF38 below shows the self-critical model. All the examples shown in Tables TABREF39-TABREF44 are chosen as per the shortest article lengths available due to space constraints."
]
],
"section_name": [
"Introduction",
"Introduction ::: Problem Formulation",
"Related Work",
"Proposed Models",
"Proposed Models ::: Neural Semantic Encoder:",
"Proposed Models ::: Improved NSE",
"Proposed Models ::: Hierarchical NSE",
"Proposed Models ::: Self-Critical Sequence Training",
"Experiments and Results ::: Dataset",
"Experiments and Results ::: Training Settings",
"Experiments and Results ::: Evaluation",
"Conclusion",
"Appendix"
]
} | {
"answers": [
{
"annotation_id": [
"0aa5b65961017d64969fba8c63a3b594b7557faf",
"619665522761e69c57b11b6451e9a5574812b87c"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We used the self-critical model of BIBREF13 proposed for image captioning. In self-critical sequence training, the REINFORCE algorithm BIBREF20 is used by modifying its baseline as the greedy output of the current model. At each time-step $t$, the model predicts two words: $\\hat{y}_{t}$ sampled from $p(\\hat{y}_{t} | \\hat{y}_{1}, \\hat{y}_{2}, ..., \\hat{y}_{t-1}, x)$, the baseline output that is greedily generated by considering the most probable word from the vocabulary and $\\tilde{y}_{t}$ sampled from the $p(\\tilde{y}_{t} | \\tilde{y}_{1}, \\tilde{y}_{2}, ..., \\tilde{y}_{t-1}, x)$. This model is trained using the following loss function:",
"Using the above training objective, the model learns to generate samples with high probability and thereby increasing $r(\\tilde{y})$ above $r(\\hat{y})$. Additionally, we have used enthttps://stackoverflow.com/questions/19053077/looping-over-data-and-creating-individual-figuresropy regularization.",
"Where, $p(\\tilde{y}_{t})=p(\\tilde{y}_{t} | \\tilde{y}_{1}, \\tilde{y}_{2}, ..., \\tilde{y}_{t-1}, x)$ is the sampling probability and $V$ is the size of the vocabulary. It is similar to the exploration-exploitation trade-off. $\\alpha $ is the regularization coefficient that explicitly controls this trade-off: a higher $\\alpha $ corresponds to more exploration, and a lower $\\alpha $ corresponds to more exploitation. We have found that all TensorFlow based open-source implementations of self-critic models use a function (tf.py_func) that runs only on CPU and it is very slow. To the best of our knowledge, ours is the first GPU based implementation."
],
"extractive_spans": [
"We used the self-critical model of BIBREF13 proposed for image captioning",
"Additionally, we have used enthttps://stackoverflow.com/questions/19053077/looping-over-data-and-creating-individual-figuresropy regularization.",
"To the best of our knowledge, ours is the first GPU based implementation."
],
"free_form_answer": "",
"highlighted_evidence": [
"We used the self-critical model of BIBREF13 proposed for image captioning. In self-critical sequence training, the REINFORCE algorithm BIBREF20 is used by modifying its baseline as the greedy output of the current model. At each time-step $t$, the model predicts two words: $\\hat{y}_{t}$ sampled from $p(\\hat{y}_{t} | \\hat{y}_{1}, \\hat{y}_{2}, ..., \\hat{y}_{t-1}, x)$, the baseline output that is greedily generated by considering the most probable word from the vocabulary and $\\tilde{y}_{t}$ sampled from the $p(\\tilde{y}_{t} | \\tilde{y}_{1}, \\tilde{y}_{2}, ..., \\tilde{y}_{t-1}, x)$. This model is trained using the following loss function:\n\nUsing the above training objective, the model learns to generate samples with high probability and thereby increasing $r(\\tilde{y})$ above $r(\\hat{y})$. Additionally, we have used enthttps://stackoverflow.com/questions/19053077/looping-over-data-and-creating-individual-figuresropy regularization.\n\nWhere, $p(\\tilde{y}_{t})=p(\\tilde{y}_{t} | \\tilde{y}_{1}, \\tilde{y}_{2}, ..., \\tilde{y}_{t-1}, x)$ is the sampling probability and $V$ is the size of the vocabulary. It is similar to the exploration-exploitation trade-off. $\\alpha $ is the regularization coefficient that explicitly controls this trade-off: a higher $\\alpha $ corresponds to more exploration, and a lower $\\alpha $ corresponds to more exploitation. We have found that all TensorFlow based open-source implementations of self-critic models use a function (tf.py_func) that runs only on CPU and it is very slow. To the best of our knowledge, ours is the first GPU based implementation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"3dce52bf3edf5ee7a5dc95e9379931e53617efa0",
"6fcb9e8572efe14719c16f323368ca200008764e"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: ROUGE F1 scores on the test set. Our hierarchical (Hier-NSE) model outperform previous hierarchical and pointer-generator models. Hier-NSE-factor is the factored model and Hier-NSE-SC is the self-critic model."
],
"extractive_spans": [],
"free_form_answer": "Abstractive and extractive models from Nallapati et al., 2016, Pointer generator models with and without coverage from See et al., 2017, and Reinforcement Learning models from Paulus et al., 2018, and Celikyilmaz et al., 2018.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: ROUGE F1 scores on the test set. Our hierarchical (Hier-NSE) model outperform previous hierarchical and pointer-generator models. Hier-NSE-factor is the factored model and Hier-NSE-SC is the self-critic model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"All the models are evaluated using the standard metric ROUGE; we report the F1 scores for ROUGE-1, ROUGE-2, and ROUGE-L, which quantitively represent word-overlap, bigram-overlap, and longest common subsequence between reference summary and the summary that is to be evaluated. The results are obtained using pyrouge package. The performance of various models and our improvements are summarized in Table TABREF37. A direct implementation of NSE performed very poorly due to the simple dot-product attention mechanism. In NMT, a transformation from word-vectors in one language to another one (say English to French) using a mere matrix multiplication is enough because of the one-to-one correspondence between words and the underlying linear structure imposed in learning the word vectors BIBREF23. However, in text summarization a word (sentence) could be a condensation of a group of words (sentences). Therefore, using a complex neural network-based attention mechanism proposed improved the performance. Both dot-product and additive BIBREF11 mechanisms perform similarly for the NMT task, but the difference is more pronounced for the text summarization task simply because of the nature of the problem as described earlier. Replacing Multi-Layered Perceptron (MLP) in the NSE with an LSTM further improved the performance because it remembers what was previously composed and facilitates the composition of novel words. This also eliminates the need for additional mechanisms to penalize repetitions such as coverage BIBREF2 and intra-attention BIBREF18. Finally, using memories for each sentence enriches the corresponding word representation, and the document memory enriches the sentence representation that help the decoder. Please refer to the appendix for a few example outputs. Table TABREF34 shows the results in comparison to the previous methods. Our hierarchical model outperforms BIBREF7 (HIER) by 5 ROUGE points. Our factored model achieves the new state-of-the-art (SoTA) result, outperforming BIBREF4 by almost 4 ROUGE points.",
"FLOAT SELECTED: Table 1: ROUGE F1 scores on the test set. Our hierarchical (Hier-NSE) model outperform previous hierarchical and pointer-generator models. Hier-NSE-factor is the factored model and Hier-NSE-SC is the self-critic model.",
"FLOAT SELECTED: Table 2: Performance of various NSE models on CNN/Daily Mail corpus. Please note that the data is not factored here."
],
"extractive_spans": [],
"free_form_answer": "HierAttn \nabstractive model \nPointer Generator \nPointer Generator + coverage \nMLE+RL, with intra-attention\n DCA, MLE+RL\nPlain NSE",
"highlighted_evidence": [
"TABREF34 shows the results in comparison to the previous methods. Our hierarchical model outperforms BIBREF7 (HIER) by 5 ROUGE points. Our factored model achieves the new state-of-the-art (SoTA) result, outperforming BIBREF4 by almost 4 ROUGE points.",
"FLOAT SELECTED: Table 1: ROUGE F1 scores on the test set. Our hierarchical (Hier-NSE) model outperform previous hierarchical and pointer-generator models. Hier-NSE-factor is the factored model and Hier-NSE-SC is the self-critic model.",
"The performance of various models and our improvements are summarized in Table TABREF37",
"FLOAT SELECTED: Table 2: Performance of various NSE models on CNN/Daily Mail corpus. Please note that the data is not factored here."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5e3e382b7704b26b88492038ec503e65307c11d5",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"02650104141756128784edf838b14dc4e8d8a10d",
"bc9beb17d26403fcd8ed0c34cf802a06de589aea"
],
"answer": [
{
"evidence": [
"All the models are evaluated using the standard metric ROUGE; we report the F1 scores for ROUGE-1, ROUGE-2, and ROUGE-L, which quantitively represent word-overlap, bigram-overlap, and longest common subsequence between reference summary and the summary that is to be evaluated. The results are obtained using pyrouge package. The performance of various models and our improvements are summarized in Table TABREF37. A direct implementation of NSE performed very poorly due to the simple dot-product attention mechanism. In NMT, a transformation from word-vectors in one language to another one (say English to French) using a mere matrix multiplication is enough because of the one-to-one correspondence between words and the underlying linear structure imposed in learning the word vectors BIBREF23. However, in text summarization a word (sentence) could be a condensation of a group of words (sentences). Therefore, using a complex neural network-based attention mechanism proposed improved the performance. Both dot-product and additive BIBREF11 mechanisms perform similarly for the NMT task, but the difference is more pronounced for the text summarization task simply because of the nature of the problem as described earlier. Replacing Multi-Layered Perceptron (MLP) in the NSE with an LSTM further improved the performance because it remembers what was previously composed and facilitates the composition of novel words. This also eliminates the need for additional mechanisms to penalize repetitions such as coverage BIBREF2 and intra-attention BIBREF18. Finally, using memories for each sentence enriches the corresponding word representation, and the document memory enriches the sentence representation that help the decoder. Please refer to the appendix for a few example outputs. Table TABREF34 shows the results in comparison to the previous methods. Our hierarchical model outperforms BIBREF7 (HIER) by 5 ROUGE points. Our factored model achieves the new state-of-the-art (SoTA) result, outperforming BIBREF4 by almost 4 ROUGE points.",
"FLOAT SELECTED: Table 1: ROUGE F1 scores on the test set. Our hierarchical (Hier-NSE) model outperform previous hierarchical and pointer-generator models. Hier-NSE-factor is the factored model and Hier-NSE-SC is the self-critic model."
],
"extractive_spans": [],
"free_form_answer": "ROUGE-1 41.69\nROUGE-2 19.47\nROUGE-L 37.92",
"highlighted_evidence": [
"Table TABREF34 shows the results in comparison to the previous methods. Our hierarchical model outperforms BIBREF7 (HIER) by 5 ROUGE points. Our factored model achieves the new state-of-the-art (SoTA) result, outperforming BIBREF4 by almost 4 ROUGE points.",
"FLOAT SELECTED: Table 1: ROUGE F1 scores on the test set. Our hierarchical (Hier-NSE) model outperform previous hierarchical and pointer-generator models. Hier-NSE-factor is the factored model and Hier-NSE-SC is the self-critic model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: ROUGE F1 scores on the test set. Our hierarchical (Hier-NSE) model outperform previous hierarchical and pointer-generator models. Hier-NSE-factor is the factored model and Hier-NSE-SC is the self-critic model."
],
"extractive_spans": [],
"free_form_answer": "41.69 ROUGE-1",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: ROUGE F1 scores on the test set. Our hierarchical (Hier-NSE) model outperform previous hierarchical and pointer-generator models. Hier-NSE-factor is the factored model and Hier-NSE-SC is the self-critic model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"5e3e382b7704b26b88492038ec503e65307c11d5"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How is GPU-based self-critical Reinforcement Learing model designed?",
"What are previoius similar models authors are referring to?",
"What was previous state of the art on factored dataset?"
],
"question_id": [
"2c4db4398ecff7e4c1c335a2cb3864bfdc31df1a",
"4738158f92b5b520ceba6207e8029ae082786dbe",
"4dadde7c61230553ef14065edd8c1c7e41b9c329"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Document sentences are first projected into a semantic space typically by an encoder in a sequence-to-sequence model. g1, g2, g3 are highlights of a document representing closely related sentence-semantics {h(1)1 , h (1) 2 , h (1) 3 }, {h (2) 1 , h (2) 2 , h (2) 3 }, {h (3) 1 , h (3) 2 , h (3) 3 } respectively. These highlights are then used by the decoder to form concise summaries.",
"Figure 2: Hierarchical NSE: From a given article, all the M sentences consisting of N words each are processed by the NSE using read (R), compose (C) and write (W) operations. Each sentence memory is updated N times by each word in the sentence ({M (k)si }Nk=1). After the last encoder step, all the updated sentence memories MNs1 ,M N s2 , ...,M N sM are concatenated to form the cumulative sentence memory Ms. The decoder then uses the cumulative sentence memory Ms and document memory Md in a similar fashion to produce the write vectors ht that are passed through a softmax layer to obtain the vocabulary distribution.",
"Table 1: ROUGE F1 scores on the test set. Our hierarchical (Hier-NSE) model outperform previous hierarchical and pointer-generator models. Hier-NSE-factor is the factored model and Hier-NSE-SC is the self-critic model.",
"Table 2: Performance of various NSE models on CNN/Daily Mail corpus. Please note that the data is not factored here.",
"Figure 3: Self-Critic training reduces exposure bias and by learning a policy whose samples score better than the greedy samples that are used during test time in a supervised learning setting.",
"Table 3: Sample outputs for both non-factored and factored input articles. While factoring, each surface word is augmented with lemma and PoS tag separated by |.",
"Table 4: Sample outputs for both non-factored and factored input articles. While factoring, each surface word is augmented with lemma and PoS tag separated by |.",
"Table 5: Sample outputs from the hierarchical NSE and self-critical model.",
"Table 6: Factored input and outputs for the same example used in Table 5.",
"Table 7: Sample outputs from the hierarchical NSE and self-critical model.",
"Table 8: Factored input and outputs for the same example used in Table 7."
],
"file": [
"3-Figure1-1.png",
"6-Figure2-1.png",
"9-Table1-1.png",
"9-Table2-1.png",
"12-Figure3-1.png",
"13-Table3-1.png",
"14-Table4-1.png",
"15-Table5-1.png",
"15-Table6-1.png",
"16-Table7-1.png",
"16-Table8-1.png"
]
} | [
"What are previoius similar models authors are referring to?",
"What was previous state of the art on factored dataset?"
] | [
[
"1910.03177-Experiments and Results ::: Evaluation-0",
"1910.03177-9-Table1-1.png",
"1910.03177-9-Table2-1.png"
],
[
"1910.03177-Experiments and Results ::: Evaluation-0",
"1910.03177-9-Table1-1.png"
]
] | [
"HierAttn \nabstractive model \nPointer Generator \nPointer Generator + coverage \nMLE+RL, with intra-attention\n DCA, MLE+RL\nPlain NSE",
"41.69 ROUGE-1"
] | 220 |
2004.02143 | Reinforced Multi-task Approach for Multi-hop Question Generation | Question generation (QG) attempts to solve the inverse of question answering (QA) problem by generating a natural language question given a document and an answer. While sequence to sequence neural models surpass rule-based systems for QG, they are limited in their capacity to focus on more than one supporting fact. For QG, we often require multiple supporting facts to generate high-quality questions. Inspired by recent works on multi-hop reasoning in QA, we take up Multi-hop question generation, which aims at generating relevant questions based on supporting facts in the context. We employ multitask learning with the auxiliary task of answer-aware supporting fact prediction to guide the question generator. In addition, we also proposed a question-aware reward function in a Reinforcement Learning (RL) framework to maximize the utilization of the supporting facts. We demonstrate the effectiveness of our approach through experiments on the multi-hop question answering dataset, HotPotQA. Empirical evaluation shows our model to outperform the single-hop neural question generation models on both automatic evaluation metrics such as BLEU, METEOR, and ROUGE, and human evaluation metrics for quality and coverage of the generated questions. | {
"paragraphs": [
[
"In natural language processing (NLP), question generation is considered to be an important yet challenging problem. Given a passage and answer as inputs to the model, the task is to generate a semantically coherent question for the given answer.",
"In the past, question generation has been tackled using rule-based approaches such as question templates BIBREF0 or utilizing named entity information and predictive argument structures of sentences BIBREF1. Recently, neural-based approaches have accomplished impressive results BIBREF2, BIBREF3, BIBREF4 for the task of question generation. The availability of large-scale machine reading comprehension datasets such as SQuAD BIBREF5, NewsQA BIBREF6, MSMARCO BIBREF7 etc. have facilitated research in question answering task. SQuAD BIBREF5 dataset itself has been the de facto choice for most of the previous works in question generation. However, 90% of the questions in SQuAD can be answered from a single sentence BIBREF8, hence former QG systems trained on SQuAD are not capable of distilling and utilizing information from multiple sentences. Recently released multi-hop datasets such as QAngaroo BIBREF9, ComplexWebQuestions BIBREF10 and HotPotQA BIBREF11 are more suitable for building QG systems that required to gather and utilize information across multiple documents as opposed to a single paragraph or sentence.",
"In multi-hop question answering, one has to reason over multiple relevant sentences from different paragraphs to answer a given question. We refer to these relevant sentences as supporting facts in the context. Hence, we frame Multi-hop question generation as the task of generating the question conditioned on the information gathered from reasoning over all the supporting facts across multiple paragraphs/documents. Since this task requires assembling and summarizing information from multiple relevant documents in contrast to a single sentence/paragraph, therefore, it is more challenging than the existing single-hop QG task. Further, the presence of irrelevant information makes it difficult to capture the supporting facts required for question generation. The explicit information about the supporting facts in the document is not often readily available, which makes the task more complex. In this work, we provide an alternative to get the supporting facts information from the document with the help of multi-task learning. Table TABREF1 gives sample examples from SQuAD and HotPotQA dataset. It is cleared from the example that the single-hop question is formed by focusing on a single sentence/document and answer, while in multi-hop question, multiple supporting facts from different documents and answer are accumulated to form the question.",
"Multi-hop QG has real-world applications in several domains, such as education, chatbots, etc. The questions generated from the multi-hop approach will inspire critical thinking in students by encouraging them to reason over the relationship between multiple sentences to answer correctly. Specifically, solving these questions requires higher-order cognitive-skills (e.g., applying, analyzing). Therefore, forming challenging questions is crucial for evaluating a student’s knowledge and stimulating self-learning. Similarly, in goal-oriented chatbots, multi-hop QG is an important skill for chatbots, e.g., in initiating conversations, asking and providing detailed information to the user by considering multiple sources of information. In contrast, in a single-hop QG, only single source of information is considered while generation.",
"In this paper, we propose to tackle Multi-hop QG problem in two stages. In the first stage, we learn supporting facts aware encoder representation to predict the supporting facts from the documents by jointly training with question generation and subsequently enforcing the utilization of these supporting facts. The former is achieved by sharing the encoder weights with an answer-aware supporting facts prediction network, trained jointly in a multi-task learning framework. The latter objective is formulated as a question-aware supporting facts prediction reward, which is optimized alongside supervised sequence loss. Additionally, we observe that multi-task framework offers substantial improvements in the performance of question generation and also avoid the inclusion of noisy sentences information in generated question, and reinforcement learning (RL) brings the complete and complex question to otherwise maximum likelihood estimation (MLE) optimized QG model.",
"Our main contributions in this work are: (i). We introduce the problem of multi-hop question generation and propose a multi-task training framework to condition the shared encoder with supporting facts information. (ii). We formulate a novel reward function, multihop-enhanced reward via question-aware supporting fact predictions to enforce the maximum utilization of supporting facts to generate a question; (iii). We introduce an automatic evaluation metric to measure the coverage of supporting facts in the generated question. (iv). Empirical results show that our proposed method outperforms the current state-of-the-art single-hop QG models over several automatic and human evaluation metrics on the HotPotQA dataset."
],
[
"Question generation literature can be broadly divided into two classes based on the features used for generating questions. The former regime consists of rule-based approaches BIBREF12, BIBREF1 that rely on human-designed features such as named-entity information, etc. to leverage the semantic information from a context for question generation. In the second category, question generation problem is treated as a sequence-to-sequence BIBREF13 learning problem, which involves automatic learning of useful features from the context by leveraging the sheer volume of training data. The first neural encoder-decoder model for question generation was proposed in BIBREF2. However, this work does not take the answer information into consideration while generating the question. Thereafter, several neural-based QG approaches BIBREF3, BIBREF14, BIBREF15 have been proposed that utilize the answer position information and copy mechanism. BIBREF16 and BIBREF17 demonstrated an appreciable improvement in the performance of the QG task when trained in a multi-task learning framework.",
"The model proposed by BIBREF18, BIBREF19 for single-document QA experience a significant drop in accuracy when applied in multiple documents settings. This shortcoming of single-document QA datasets is addressed by newly released multi-hop datasets BIBREF9, BIBREF10, BIBREF11 that promote multi-step inference across several documents. So far, multi-hop datasets have been predominantly used for answer generation tasks BIBREF20, BIBREF21, BIBREF22. Our work can be seen as an extension to single hop question generation where a non-trivial number of supporting facts are spread across multiple documents."
],
[
"In multi-hop question generation, we consider a document list $L$ with $n_L$ documents, and an $m$-word answer $A$. Let the total number of words in all the documents $D_i \\in L$ combined be $N$. Let a document list $L$ contains a total of $K$ candidate sentences $CS=\\lbrace S_1, S_2, \\ldots , S_K\\rbrace $ and a set of supporting facts $SF$ such that $SF \\in CS$. The answer $A=\\lbrace w_{D_k^{a_1}} , w_{D_k^{a_2}}, \\ldots , w_{D_k^{a_m}} \\rbrace $ is an $m$-length text span in one of the documents $D_k \\in L$. Our task is to generate an $n_Q$-word question sequence $\\hat{Q}= \\lbrace y_1, y_2, \\ldots , y_{n_Q} \\rbrace $ whose answer is based on the supporting facts $SF$ in document list $L$. Our proposed model for multi-hop question generation is depicted in Figure FIGREF2."
],
[
"In this section, we discuss the various components of our proposed Multi-Hop QG model. Our proposed model has four components (i). Document and Answer Encoder which encodes the list of documents and answer to further generate the question, (ii). Multi-task Learning to facilitate the QG model to automatically select the supporting facts to generate the question, (iii). Question Decoder, which generates questions using the pointer-generator mechanism and (iv). MultiHop-Enhanced QG component which forces the model to generate those questions which can maximize the supporting facts prediction based reward."
],
[
"The encoder of the Multi-Hop QG model encodes the answer and documents using the layered Bi-LSTM network."
],
[
"We introduce an answer tagging feature that encodes the relative position information of the answer in a list of documents. The answer tagging feature is an $N$ length list of vector of dimension $d_1$, where each element has either a tag value of 0 or 1. Elements that correspond to the words in the answer text span have a tag value of 1, else the tag value is 0. We map these tags to the embedding of dimension $d_1$. We represent the answer encoding features using $\\lbrace a_1, \\ldots , a_N\\rbrace $."
],
[
"To encode the document list $L$, we first concatenate all the documents $D_k \\in L$, resulting in a list of $N$ words. Each word in this list is then mapped to a $d_2$ dimensional word embedding $u \\in \\mathbb {R}^{d_2}$. We then concatenate the document word embeddings with answer encoding features and feed it to a bi-directional LSTM encoder $\\lbrace LSTM^{fwd}, LSTM^{bwd}\\rbrace $.",
"We compute the forward hidden states $\\vec{z}_{t}$ and the backward hidden states $ \\scalebox {-1}[1]{\\vec{\\scalebox {-1}[1]{z}}}_{t}$ and concatenate them to get the final hidden state $z_{t} = [\\vec{z}_{t} \\oplus \\scalebox {-1}[1]{\\vec{\\scalebox {-1}[1]{z}}}_{t}]$. The answer-aware supporting facts predictions network (will be introduced shortly) takes the encoded representation as input and predicts whether the candidate sentence is a supporting fact or not. We represent the predictions with $p_1, p_2, \\ldots , p_K$. Similar to answer encoding, we map each prediction $p_i$ with a vector $v_i$ of dimension $d_3$.",
"A candidate sentence $S_i$ contains the $n_i$ number of words. In a given document list $L$, we have $K$ candidate sentences such that $\\sum _{i=1}^{i=K} n_i = N$. We generate the supporting fact encoding $sf_i \\in \\mathbb {R}^{n_i \\times d_3}$ for the candidate sentence $S_i$ as follows:",
"where $e_{n_i} \\in \\mathbb {R}^{n_i}$ is a vector of 1s. The rows of $sf_i$ denote the supporting fact encoding of the word present in the candidate sentence $S_i$. We denote the supporting facts encoding of a word $w_t$ in the document list $L$ with $s_t \\in \\mathbb {R}^{d_3}$. Since, we also deal with the answer-aware supporting facts predictions in a multi-task setting, therefore, to obtain a supporting facts induced encoder representation, we introduce another Bi-LSTM layer.",
"Similar to the first encoding layer, we concatenate the forward and backward hidden states to obtain the final hidden state representation."
],
[
"We introduce the task of answer-aware supporting facts prediction to condition the QG model's encoder with the supporting facts information. Multi-task learning facilitates the QG model to automatically select the supporting facts conditioned on the given answer. This is achieved by using a multi-task learning framework where the answer-aware supporting facts prediction network and Multi-hop QG share a common document encoder (Section SECREF8). The network takes the encoded representation of each candidate sentence $S_i \\in CS$ as input and sentence-wise predictions for the supporting facts. More specifically, we concatenate the first and last hidden state representation of each candidate sentence from the encoder outputs and pass it through a fully-connected layer that outputs a Sigmoid probability for the sentence to be a supporting fact. The architecture of this network is illustrated in Figure FIGREF2 (left). This network is then trained with a binary cross entropy loss and the ground-truth supporting facts labels:",
"where $N$ is the number of document list, $S$ the number of candidate sentences in a particular training example, $\\delta _i^j$ and $p_i^{j}$ represent the ground truth supporting facts label and the output Sigmoid probability, respectively."
],
[
"We use a LSTM network with global attention mechanism BIBREF23 to generate the question $\\hat{Q} = \\lbrace y_1, y_2, \\ldots , y_m\\rbrace $ one word at a time. We use copy mechanism BIBREF24, BIBREF25 to deal with rare or unknown words. At each timestep $t$,",
"The attention distribution $\\alpha _t$ and context vector $c_t$ are obtained using the following equations:",
"The probability distribution over the question vocabulary is then computed as,",
"where $\\mathbf {W_q}$ is a weight matrix. The probability of picking a word (generating) from the fixed vocabulary words, or the probability of not copying a word from the document list $L$ at a given timestep $t$ is computed by the following equation:",
"where, $\\mathbf {W_a}$ and $\\mathbf {W_b}$ are the weight matrices and $\\sigma $ represents the Sigmoid function. The probability distribution over the words in the document is computed by summing over all the attention scores of the corresponding words:",
"where $\\mathbf {1}\\lbrace w==w_i\\rbrace $ denotes the vector of length $N$ having the value 1 where $w==w_i$, otherwise 0. The final probability distribution over the dynamic vocabulary (document and question vocabulary) is calculated by the following:"
],
[
"We introduce a reinforcement learning based reward function and sequence training algorithm to train the RL network. The proposed reward function forces the model to generate those questions which can maximize the reward."
],
[
"Our reward function is a neural network, we call it Question-Aware Supporting Fact Prediction network. We train our neural network based reward function for the supporting fact prediction task on HotPotQA dataset. This network takes as inputs the list of documents $L$ and the generated question $\\hat{Q}$, and predicts the supporting fact probability for each candidate sentence. This model subsumes the latest technical advances of question answering, including character-level models, self-attention BIBREF26, and bi-attention BIBREF18. The network architecture of the supporting facts prediction model is similar to BIBREF11, as shown in Figure FIGREF2 (right). For each candidate sentence in the document list, we concatenate the output of the self-attention layer at the first and last positions, and use a binary linear classifier to predict the probability that the current sentence is a supporting fact. This network is pre-trained on HotPotQA dataset using binary cross-entropy loss.",
"For each generated question, we compute the F1 score (as a reward) between the ground truth supporting facts and the predicted supporting facts. This reward is supposed to be carefully used because the QG model can cheat by greedily copying words from the supporting facts to the generated question. In this case, even though high MER is achieved, the model loses the question generation ability. To handle this situation, we regularize this reward function with additional Rouge-L reward, which avoids the process of greedily copying words from the supporting facts by ensuring the content matching between the ground truth and generated question. We also experiment with BLEU as an additional reward, but Rouge-L as a reward has shown to outperform the BLEU reward function."
],
[
"We use the REINFORCE BIBREF27 algorithm to learn the policy defined by question generation model parameters, which can maximize our expected rewards. To avoid the high variance problem in the REINFORCE estimator, self-critical sequence training (SCST) BIBREF28 framework is used for sequence training that uses greedy decoding score as a baseline. In SCST, during training, two output sequences are produced: $y^{s}$, obtained by sampling from the probability distribution $P(y^s_t | y^s_1, \\ldots , y^s_{t-1}, \\mathcal {D})$, and $y^g$, the greedy-decoding output sequence. We define $r(y,y^*)$ as the reward obtained for an output sequence $y$, when the ground truth sequence is $y^*$. The SCST loss can be written as,",
"where, $R= \\sum _{t=1}^{n^{\\prime }} \\log P(y^s_t | y^s_1, \\ldots , y^s_{t-1}, \\mathcal {D}) $. However, the greedy decoding method only considers the single-word probability, while the sampling considers the probabilities of all words in the vocabulary. Because of this the greedy reward $r(y^{g},y^*)$ has higher variance than the Monte-Carlo sampling reward $r(y^{s}, y^*)$, and their gap is also very unstable. We experiment with the SCST loss and observe that greedy strategy causes SCST to be unstable in the training progress. Towards this, we introduce a weight history factor similar to BIBREF29. The history factor is the ratio of the mean sampling reward and mean greedy strategy reward in previous $k$ iterations. We update the SCST loss function in the following way:",
"where $\\alpha $ is a hyper-parameter, $t$ is the current iteration, $h$ is the history determines, the number of previous rewards are used to estimate. The denominator of the history factor is used to normalize the current greedy reward $ r(y^{g},y^*)$ with the mean greedy reward of previous $h$ iterations. The numerator of the history factor ensures the greedy reward has a similar magnitude with the mean sample reward of previous $h$ iterations.",
""
],
[
"With $y^* = \\lbrace y^*_1, y^*_2, \\ldots , y^*_{m}\\rbrace $ as the ground-truth output sequence for a given input sequence $D$, the maximum-likelihood training objective can be written as,",
"We use a mixed-objective learning function BIBREF32, BIBREF33 to train the final network:",
"where $\\gamma _1$, $\\gamma _2$, and $\\gamma _3$ correspond to the weights of $\\mathcal {L}_{rl}$, $\\mathcal {L}_{ml}$, and $\\mathcal {L}_{sp}$, respectively. In our experiments, we use the same vocabulary for both the encoder and decoder. Our vocabulary consists of the top 50,000 frequent words from the training data. We use the development dataset for hyper-parameter tuning. Pre-trained GloVe embeddings BIBREF34 of dimension 300 are used in the document encoding step. The hidden dimension of all the LSTM cells is set to 512. Answer tagging features and supporting facts position features are embedded to 3-dimensional vectors. The dropout BIBREF35 probability $p$ is set to $0.3$. The beam size is set to 4 for beam search. We initialize the model parameters randomly using a Gaussian distribution with Xavier scheme BIBREF36. We first pre-train the network by minimizing only the maximum likelihood (ML) loss. Next, we initialize our model with the pre-trained ML weights and train the network with the mixed-objective learning function. The following values of hyperparameters are found to be optimal: (i) $\\gamma _1=0.99$, $\\gamma _2=0.01$, $\\gamma _3=0.1$, (ii) $d_1=300$, $d_2=d_3=3$, (iii) $\\alpha =0.9, \\beta = 10$, $h=5000$. Adam BIBREF37 optimizer is used to train the model with (i) $ \\beta _{1} = 0.9 $, (ii) $ \\beta _{2} = 0.999 $, and (iii) $ \\epsilon =10^{-8} $. For MTL-QG training, the initial learning rate is set to $0.01$. For our proposed model training the learning rate is set to $0.00001$. We also apply gradient clipping BIBREF38 with range $ [-5, 5] $."
],
[
"We use the HotPotQA BIBREF11 dataset to evaluate our methods. This dataset consists of over 113k Wikipedia-based question-answer pairs, with each question requiring multi-step reasoning across multiple supporting documents to infer the answer. While there exists other multi-hop datasets BIBREF9, BIBREF10, only HotPotQA dataset provides the sentence-level ground-truth labels to locate the supporting facts in the list of documents. We combine the training set ($90,564$) and development set ($7,405$) and randomly split the resulting data, with 80% for training, 10% for development, 10% for testing.",
"We conduct experiments to evaluate the performance of our proposed and other QG methods using the evaluation metrics: BLEU-1, BLEU-2, BLEU-3, BLEU-4 BIBREF39, ROUGE-L BIBREF40 and METEOR BIBREF41.",
"Metric for MultiHoping in QG: To assess the multi-hop capability of the question generation model, we introduce additional metric SF coverage, which measures in terms of F1 score. This metric is similar to MultiHop-Enhanced Reward, where we use the question-aware supporting facts predictions network that takes the generated question and document list as input and predict the supporting facts. F1 score measures the average overlap between the predicted and ground-truth supporting facts as computed in BIBREF11."
],
[
"We first describe some variants of our proposed MultiHop-QG model.",
"(1) SharedEncoder-QG: This is an extension of the NQG model BIBREF30 with shared encoder for QG and answer-aware supporting fact predictions tasks. This model is a variant of our proposed model, where we encode the document list using a two-layer Bi-LSTM which is shared between both the tasks. The input to the shared Bi-LSTM is word and answer encoding as shown in Eq. DISPLAY_FORM9. The decoder is a single-layer LSTM which generates the multi-hop question.",
"(2) MTL-QG: This variant is similar to the SharedEncoder-QG, here we introduce another Bi-LSTM layer which takes the question, answer and supporting fact embedding as shown in Eq. DISPLAY_FORM11.",
"The automatic evaluation scores of our proposed method, baselines, and state-of-the-art single-hop question generation model on the HotPotQA test set are shown in Table TABREF26. The performance improvements with our proposed model over the baselines and state-of-the-arts are statistically significant as $(p <0.005)$. For the question-aware supporting fact prediction model (c.f. SECREF21), we obtain the F1 and EM scores of $84.49$ and $44.20$, respectively, on the HotPotQA development dataset. We can not directly compare the result ($21.17$ BLEU-4) on the HotPotQA dataset reported in BIBREF44 as their dataset split is different and they only use the ground-truth supporting facts to generate the questions.",
"We also measure the multi-hopping in terms of SF coverage and reported the results in Table TABREF26 and Table TABREF27. We achieve skyline performance of $80.41$ F1 value on the ground-truth questions of the test dataset of HotPotQA."
],
[
"Our results in Table TABREF26 are in agreement with BIBREF3, BIBREF14, BIBREF30, which establish the fact that providing the answer tagging features as input leads to considerable improvement in the QG system's performance. Our SharedEncoder-QG model, which is a variant of our proposed MultiHop-QG model outperforms all the baselines state-of-the-art models except Semantic-Reinforced. The proposed MultiHop-QG model achieves the absolute improvement of $4.02$ and $3.18$ points compared to NQG and Max-out Pointer model, respectively, in terms of BLEU-4 metric.",
"To analyze the contribution of each component of the proposed model, we perform an ablation study reported in Table TABREF27. Our results suggest that providing multitask learning with shared encoder helps the model to improve the QG performance from $19.55$ to $20.64$ BLEU-4. Introducing the supporting facts information obtained from the answer-aware supporting fact prediction task further improves the QG performance from $20.64$ to $21.28$ BLEU-4. Joint training of QG with the supporting facts prediction provides stronger supervision for identifying and utilizing the supporting facts information. In other words, by sharing the document encoder between both the tasks, the network encodes better representation (supporting facts aware) of the input document. Such presentation is capable of efficiently filtering out the irrelevant information when processing multiple documents and performing multi-hop reasoning for question generation. Further, the MultiHop-Enhanced Reward (MER) with Rouge reward provides a considerable advancement on automatic evaluation metrics."
],
[
"We have shown the examples in Table TABREF31, where our proposed reward assists the model to maximize the uses of all the supporting facts to generate better human alike questions. In the first example, Rouge-L reward based model ignores the information `second czech composer' from the first supporting fact, whereas our MER reward based proposed model considers that to generate the question. Similarly, in the second example, our model considers the information `disused station located' from the supporting fact where the former model ignores it while generating the question. We also compare the questions generated from the NQG and our proposed method with the ground-truth questions.",
"Human Evaluation: For human evaluation, we directly compare the performance of the proposed approach with NQG model. We randomly sample 100 document-question-answer triplets from the test set and ask four professional English speakers to evaluate them. We consider three modalities: naturalness, which indicates the grammar and fluency; difficulty, which measures the document-question syntactic divergence and the reasoning needed to answer the question, and SF coverage similar to the metric discussed in Section SECREF4 except we replace the supporting facts prediction network with a human evaluator and we measure the relative supporting facts coverage compared to the ground-truth supporting facts. measure the relative coverage of supporting facts in the questions with respect to the ground-truth supporting facts. SF coverage provides a measure of the extent of supporting facts used for question generation. For the first two modalities, evaluators are asked to rate the performance of the question generator on a 1–5 scale (5 for the best). To estimate the SF coverage metric, the evaluators are asked to highlight the supporting facts from the documents based on the generated question.",
"We reported the average scores of all the human evaluator for each criteria in Table TABREF28. The proposed approach is able to generate better questions in terms of Difficulty, Naturalness and SF Coverage when compared to the NQG model."
],
[
"In this paper, we have introduced the multi-hop question generation task, which extends the natural language question generation paradigm to multiple document QA. Thereafter, we present a novel reward formulation to improve the multi-hop question generation using reinforcement and multi-task learning frameworks. Our proposed method performs considerably better than the state-of-the-art question generation systems on HotPotQA dataset. We also introduce SF Coverage, an evaluation metric to compare the performance of question generation systems based on their capacity to accumulate information from various documents. Overall, we propose a new direction for question generation research with several practical applications. In the future, we will be focusing on to improve the performance of multi-hop question generation without any strong supporting facts supervision."
]
],
"section_name": [
"Introduction",
"Related Work",
"Proposed Approach ::: Problem Statement:",
"Proposed Approach ::: Multi-Hop Question Generation Model",
"Proposed Approach ::: Multi-Hop Question Generation Model ::: Document and Answer Encoder",
"Proposed Approach ::: Multi-Hop Question Generation Model ::: Document and Answer Encoder ::: Answer Encoding:",
"Proposed Approach ::: Multi-Hop Question Generation Model ::: Document and Answer Encoder ::: Hierarchical Document Encoding:",
"Proposed Approach ::: Multi-Hop Question Generation Model ::: Multi-task Learning",
"Proposed Approach ::: Multi-Hop Question Generation Model ::: Question Decoder",
"Proposed Approach ::: Multi-Hop Question Generation Model ::: MultiHop-Enhanced QG",
"Proposed Approach ::: Multi-Hop Question Generation Model ::: MultiHop-Enhanced QG ::: MultiHop-Enhanced Reward (MER):",
"Proposed Approach ::: Multi-Hop Question Generation Model ::: MultiHop-Enhanced QG ::: Adaptive Self-critical Sequence Training:",
"Experimental Setup",
"Experimental Setup ::: Dataset:",
"Results and Analysis",
"Results and Analysis ::: Quantitative Analysis",
"Results and Analysis ::: Qualitative Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"02e5463c8fcf2856c844a9653f868addefda9352",
"bde4d8909c5e81bf0eb918e3a748195ddb9bb684"
],
"answer": [
{
"evidence": [
"Our results in Table TABREF26 are in agreement with BIBREF3, BIBREF14, BIBREF30, which establish the fact that providing the answer tagging features as input leads to considerable improvement in the QG system's performance. Our SharedEncoder-QG model, which is a variant of our proposed MultiHop-QG model outperforms all the baselines state-of-the-art models except Semantic-Reinforced. The proposed MultiHop-QG model achieves the absolute improvement of $4.02$ and $3.18$ points compared to NQG and Max-out Pointer model, respectively, in terms of BLEU-4 metric."
],
"extractive_spans": [
"the absolute improvement of $4.02$ and $3.18$ points compared to NQG and Max-out Pointer model, respectively, in terms of BLEU-4 metric"
],
"free_form_answer": "",
"highlighted_evidence": [
"The proposed MultiHop-QG model achieves the absolute improvement of $4.02$ and $3.18$ points compared to NQG and Max-out Pointer model, respectively, in terms of BLEU-4 metric."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: A relative performance (on test dataset of HotPotQA ) of different variants of the proposed method, by adding one model component.",
"FLOAT SELECTED: Table 4: Human evaluation results for our proposed approach and the NQG model. Naturalness and difficulty are rated on a 1–5 scale and SF coverage is in percentage (%)."
],
"extractive_spans": [],
"free_form_answer": "Automatic evaluation metrics show relative improvements of 11.11, 6.07, 19.29 for BLEU-4, ROUGE-L and SF Coverage respectively (over average baseline). \nHuman evaluation relative improvement for Difficulty, Naturalness and SF Coverage are 8.44, 32.64, 13.57 respectively.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: A relative performance (on test dataset of HotPotQA ) of different variants of the proposed method, by adding one model component.",
"FLOAT SELECTED: Table 4: Human evaluation results for our proposed approach and the NQG model. Naturalness and difficulty are rated on a 1–5 scale and SF coverage is in percentage (%)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"08922c1e01d82794835859a522fea4b3776fad39",
"8635caa27bd4b14e969ebc5d12bb73eda68a9d9f"
],
"answer": [
{
"evidence": [
"Human Evaluation: For human evaluation, we directly compare the performance of the proposed approach with NQG model. We randomly sample 100 document-question-answer triplets from the test set and ask four professional English speakers to evaluate them. We consider three modalities: naturalness, which indicates the grammar and fluency; difficulty, which measures the document-question syntactic divergence and the reasoning needed to answer the question, and SF coverage similar to the metric discussed in Section SECREF4 except we replace the supporting facts prediction network with a human evaluator and we measure the relative supporting facts coverage compared to the ground-truth supporting facts. measure the relative coverage of supporting facts in the questions with respect to the ground-truth supporting facts. SF coverage provides a measure of the extent of supporting facts used for question generation. For the first two modalities, evaluators are asked to rate the performance of the question generator on a 1–5 scale (5 for the best). To estimate the SF coverage metric, the evaluators are asked to highlight the supporting facts from the documents based on the generated question."
],
"extractive_spans": [
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
" ",
" ",
"We randomly sample 100 document-question-answer triplets from the test set and ask four professional English speakers to evaluate them."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the HotPotQA BIBREF11 dataset to evaluate our methods. This dataset consists of over 113k Wikipedia-based question-answer pairs, with each question requiring multi-step reasoning across multiple supporting documents to infer the answer. While there exists other multi-hop datasets BIBREF9, BIBREF10, only HotPotQA dataset provides the sentence-level ground-truth labels to locate the supporting facts in the list of documents. We combine the training set ($90,564$) and development set ($7,405$) and randomly split the resulting data, with 80% for training, 10% for development, 10% for testing.",
"In multi-hop question answering, one has to reason over multiple relevant sentences from different paragraphs to answer a given question. We refer to these relevant sentences as supporting facts in the context. Hence, we frame Multi-hop question generation as the task of generating the question conditioned on the information gathered from reasoning over all the supporting facts across multiple paragraphs/documents. Since this task requires assembling and summarizing information from multiple relevant documents in contrast to a single sentence/paragraph, therefore, it is more challenging than the existing single-hop QG task. Further, the presence of irrelevant information makes it difficult to capture the supporting facts required for question generation. The explicit information about the supporting facts in the document is not often readily available, which makes the task more complex. In this work, we provide an alternative to get the supporting facts information from the document with the help of multi-task learning. Table TABREF1 gives sample examples from SQuAD and HotPotQA dataset. It is cleared from the example that the single-hop question is formed by focusing on a single sentence/document and answer, while in multi-hop question, multiple supporting facts from different documents and answer are accumulated to form the question."
],
"extractive_spans": [],
"free_form_answer": "English",
"highlighted_evidence": [
"We use the HotPotQA BIBREF11 dataset to evaluate our methods.",
"Table TABREF1 gives sample examples from SQuAD and HotPotQA dataset. ",
"TABREF1 "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"1d2b53ad46f1c9eedabdf71273ac780f9340c3f0",
"9dbcd0c0ff62dbf51d558e448547222ad602f351"
],
"answer": [
{
"evidence": [
"We use the HotPotQA BIBREF11 dataset to evaluate our methods. This dataset consists of over 113k Wikipedia-based question-answer pairs, with each question requiring multi-step reasoning across multiple supporting documents to infer the answer. While there exists other multi-hop datasets BIBREF9, BIBREF10, only HotPotQA dataset provides the sentence-level ground-truth labels to locate the supporting facts in the list of documents. We combine the training set ($90,564$) and development set ($7,405$) and randomly split the resulting data, with 80% for training, 10% for development, 10% for testing."
],
"extractive_spans": [
" over 113k Wikipedia-based question-answer pairs"
],
"free_form_answer": "",
"highlighted_evidence": [
"This dataset consists of over 113k Wikipedia-based question-answer pairs, with each question requiring multi-step reasoning across multiple supporting documents to infer the answer."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the HotPotQA BIBREF11 dataset to evaluate our methods. This dataset consists of over 113k Wikipedia-based question-answer pairs, with each question requiring multi-step reasoning across multiple supporting documents to infer the answer. While there exists other multi-hop datasets BIBREF9, BIBREF10, only HotPotQA dataset provides the sentence-level ground-truth labels to locate the supporting facts in the list of documents. We combine the training set ($90,564$) and development set ($7,405$) and randomly split the resulting data, with 80% for training, 10% for development, 10% for testing."
],
"extractive_spans": [
"113k Wikipedia-based question-answer pairs"
],
"free_form_answer": "",
"highlighted_evidence": [
"This dataset consists of over 113k Wikipedia-based question-answer pairs, with each question requiring multi-step reasoning across multiple supporting documents to infer the answer."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How much did the model outperform",
"What language is in the dataset?",
"How big is the HotPotQA dataset?"
],
"question_id": [
"014830892d93e3c01cb659ad31c90de4518d48f3",
"ae7c5cf9c2c121097eb00d389cfd7cc2a5a7d577",
"af948ea91136c700957b438d927f58d9b051c97c"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Architecture of our proposed Multi-hop QG network. The inputs to the model are the document word embeddings and answer position (AP) features. Question generation and answer-aware supporting facts prediction model (left) jointly train the shared document encoder (Bi-LSTM) layer. The image on the right depicts our question-aware supporting facts prediction network, which is our MultiHop-Enhanced Reward function. The inputs to this model are the generated question (output of multi-hop QG network) and a list of documents.",
"Table 2: Performance comparison between proposed approach and state-of-the-art QG models on the test set of HotPotQA. Here s2s: sequence-to-sequence, s2s+copy: s2s with copy mechanism (See et al., 2017), s2s+answer: s2s with answer encoding.",
"Table 3: A relative performance (on test dataset of HotPotQA ) of different variants of the proposed method, by adding one model component.",
"Table 4: Human evaluation results for our proposed approach and the NQG model. Naturalness and difficulty are rated on a 1–5 scale and SF coverage is in percentage (%)."
],
"file": [
"4-Figure1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png"
]
} | [
"How much did the model outperform",
"What language is in the dataset?"
] | [
[
"2004.02143-Results and Analysis ::: Quantitative Analysis-0",
"2004.02143-7-Table3-1.png",
"2004.02143-7-Table4-1.png"
],
[
"2004.02143-Results and Analysis ::: Qualitative Analysis-1",
"2004.02143-Introduction-2",
"2004.02143-Experimental Setup ::: Dataset:-0"
]
] | [
"Automatic evaluation metrics show relative improvements of 11.11, 6.07, 19.29 for BLEU-4, ROUGE-L and SF Coverage respectively (over average baseline). \nHuman evaluation relative improvement for Difficulty, Naturalness and SF Coverage are 8.44, 32.64, 13.57 respectively.",
"English"
] | 221 |
1704.04451 | Optimizing Differentiable Relaxations of Coreference Evaluation Metrics | Coreference evaluation metrics are hard to optimize directly as they are non-differentiable functions, not easily decomposable into elementary decisions. Consequently, most approaches optimize objectives only indirectly related to the end goal, resulting in suboptimal performance. Instead, we propose a differentiable relaxation that lends itself to gradient-based optimisation, thus bypassing the need for reinforcement learning or heuristic modification of cross-entropy. We show that by modifying the training objective of a competitive neural coreference system, we obtain a substantial gain in performance. This suggests that our approach can be regarded as a viable alternative to using reinforcement learning or more computationally expensive imitation learning. | {
"paragraphs": [
[
"Coreference resolution is the task of identifying all mentions which refer to the same entity in a document. It has been shown beneficial in many natural language processing (NLP) applications, including question answering BIBREF0 and information extraction BIBREF1 , and often regarded as a prerequisite to any text understanding task.",
"Coreference resolution can be regarded as a clustering problem: each cluster corresponds to a single entity and consists of all its mentions in a given text. Consequently, it is natural to evaluate predicted clusters by comparing them with the ones annotated by human experts, and this is exactly what the standard metrics (e.g., MUC, B INLINEFORM0 , CEAF) do. In contrast, most state-of-the-art systems are optimized to make individual co-reference decisions, and such losses are only indirectly related to the metrics.",
"One way to deal with this challenge is to optimize directly the non-differentiable metrics using reinforcement learning (RL), for example, relying on the REINFORCE policy gradient algorithm BIBREF2 . However, this approach has not been very successful, which, as suggested by clark-manning:2016:EMNLP2016, is possibly due to the discrepancy between sampling decisions at training time and choosing the highest ranking ones at test time. A more successful alternative is using a `roll-out' stage to associate cost with possible decisions, as in clark-manning:2016:EMNLP2016, but it is computationally expensive. Imitation learning BIBREF3 , BIBREF4 , though also exploiting metrics, requires access to an expert policy, with exact policies not directly computable for the metrics of interest.",
"In this work, we aim at combining the best of both worlds by proposing a simple method that can turn popular coreference evaluation metrics into differentiable functions of model parameters. As we show, this function can be computed recursively using scores of individual local decisions, resulting in a simple and efficient estimation procedure. The key idea is to replace non-differentiable indicator functions (e.g. the member function INLINEFORM0 ) with the corresponding posterior probabilities ( INLINEFORM1 ) computed by the model. Consequently, non-differentiable functions used within the metrics (e.g. the set size function INLINEFORM2 ) become differentiable ( INLINEFORM3 ). Though we assume that the scores of the underlying statistical model can be used to define a probability model, we show that this is not a serious limitation. Specifically, as a baseline we use a probabilistic version of the neural mention-ranking model of P15-1137, which on its own outperforms the original one and achieves similar performance to its global version BIBREF5 . Importantly when we use the introduced differentiable relaxations in training, we observe a substantial gain in performance over our probabilistic baseline. Interestingly, the absolute improvement (+0.52) is higher than the one reported in clark-manning:2016:EMNLP2016 using RL (+0.05) and the one using reward rescaling (+0.37). This suggests that our method provides a viable alternative to using RL and reward rescaling.",
"The outline of our paper is as follows: we introduce our neural resolver baseline and the B INLINEFORM0 and LEA metrics in Section SECREF2 . Our method to turn a mention ranking resolver into an entity-centric resolver is presented in Section SECREF3 , and the proposed differentiable relaxations in Section SECREF4 . Section SECREF5 shows our experimental results."
],
[
"In this section we introduce neural mention ranking, the framework which underpins current state-of-the-art models BIBREF6 . Specifically, we consider a probabilistic version of the method proposed by P15-1137. In experiments we will use it as our baseline.",
"Let INLINEFORM0 be the list of mentions in a document. For each mention INLINEFORM1 , let INLINEFORM2 be the index of the mention that INLINEFORM3 is coreferent with (if INLINEFORM4 , INLINEFORM5 is the first mention of some entity appearing in the document). As standard in coreference resolution literature, we will refer to INLINEFORM6 as an antecedent of INLINEFORM7 . Then, in mention ranking the goal is to score antecedents of a mention higher than any other mentions, i.e., if INLINEFORM8 is the scoring function, we require INLINEFORM9 for all INLINEFORM10 such that INLINEFORM11 and INLINEFORM12 are coreferent but INLINEFORM13 and INLINEFORM14 are not.",
"Let INLINEFORM0 and INLINEFORM1 be respectively features of INLINEFORM2 and features of pair INLINEFORM3 . The scoring function is defined by: INLINEFORM4 ",
"where INLINEFORM0 ",
" and INLINEFORM0 are real vectors and matrices with proper dimensions, INLINEFORM1 are real scalars.",
"Unlike P15-1137, where the max-margin loss is used, we define a probabilistic model. The probability that INLINEFORM0 and INLINEFORM1 are coreferent is given by DISPLAYFORM0 ",
"Following D13-1203 we use the following softmax-margin BIBREF8 loss function: INLINEFORM0 ",
"where INLINEFORM0 are model parameters, INLINEFORM1 is the set of the indices of correct antecedents of INLINEFORM2 , and INLINEFORM3 . INLINEFORM4 is a cost function used to manipulate the contribution of different error types to the loss function: INLINEFORM5 ",
"The error types are “false anaphor”, “false new”, “wrong link”, and “no mistake”, respectively. In our experiments, we borrow their values from D13-1203: INLINEFORM0 . In the subsequent discussion, we refer to the loss as mention-ranking heuristic cross entropy."
],
[
"We use five most popular metrics,",
"MUC BIBREF9 ,",
"B INLINEFORM0 BIBREF10 ,",
"CEAF INLINEFORM0 , CEAF INLINEFORM1 BIBREF11 ,",
"BLANC BIBREF12 ,",
"LEA BIBREF13 .",
"for evaluation. However, because MUC is the least discriminative metric BIBREF13 , whereas CEAF is slow to compute, out of the five most popular metrics we incorporate into our loss only B INLINEFORM0 . In addition, we integrate LEA, as it has been shown to provide a good balance between discriminativity and interpretability.",
"Let INLINEFORM0 and INLINEFORM1 be the gold-standard entity set and an entity set given by a resolver. Recall that an entity is a set of mentions. The recall and precision of the B INLINEFORM2 metric is computed by: INLINEFORM3 ",
" The LEA metric is computed as: INLINEFORM0 ",
" where INLINEFORM0 is the number of coreference links in entity INLINEFORM1 . INLINEFORM2 , for both metrics, is defined by: INLINEFORM3 ",
" INLINEFORM0 is used in the standard evaluation."
],
[
"Mention-ranking resolvers do not explicitly provide information about entities/clusters which is required by B INLINEFORM0 and LEA. We therefore propose a simple solution that can turn a mention-ranking resolver into an entity-centric one.",
"First note that in a document containing INLINEFORM0 mentions, there are INLINEFORM1 potential entities INLINEFORM2 where INLINEFORM3 has INLINEFORM4 as the first mention. Let INLINEFORM5 be the probability that mention INLINEFORM6 corresponds to entity INLINEFORM7 . We now show that it can be computed recursively based on INLINEFORM8 as follows: INLINEFORM9 ",
" In other words, if INLINEFORM0 , we consider all possible INLINEFORM1 with which INLINEFORM2 can be coreferent, and which can correspond to entity INLINEFORM3 . If INLINEFORM4 , the link to be considered is the INLINEFORM5 's self-link. And, if INLINEFORM6 , the probability is zero, as it is impossible for INLINEFORM7 to be assigned to an entity introduced only later. See Figure FIGREF13 for extra information.",
"We now turn to two crucial questions about this formula:",
"The first question is answered in Proposition SECREF16 . The second question is important because, intuitively, when a mention INLINEFORM0 is anaphoric, the potential entity INLINEFORM1 does not exist. We will show that the answer is “No” by proving in Proposition SECREF17 that the probability that INLINEFORM2 is anaphoric is always higher than any probability that INLINEFORM3 , INLINEFORM4 refers to INLINEFORM5 .",
"Proposition 1 INLINEFORM0 is a valid probability distribution, i.e., INLINEFORM1 , for all INLINEFORM2 .",
"We prove this proposition by induction.",
"Basis: it is obvious that INLINEFORM0 .",
"Assume that INLINEFORM0 for all INLINEFORM1 . Then, INLINEFORM2 ",
" Because INLINEFORM0 for all INLINEFORM1 , this expression is equal to INLINEFORM2 ",
"Therefore, INLINEFORM0 ",
"(according to Equation EQREF5 ).",
"Proposition 2 INLINEFORM0 for all INLINEFORM1 .",
"We prove this proposition by induction.",
"Basis: for INLINEFORM0 , INLINEFORM1 ",
"Assume that INLINEFORM0 for all INLINEFORM1 and INLINEFORM2 . Then INLINEFORM3 "
],
[
"Having INLINEFORM0 computed, we can consider coreference resolution as a multiclass prediction problem. An entity-centric heuristic cross entropy loss is thus given below: INLINEFORM1 ",
"where INLINEFORM0 is the correct entity that INLINEFORM1 belongs to, INLINEFORM2 . Similar to INLINEFORM3 in the mention-ranking heuristic loss in Section SECREF2 , INLINEFORM4 is a cost function used to manipulate the contribution of the four different error types (“false anaphor”, “false new”, “wrong link”, and “no mistake”): INLINEFORM5 "
],
[
"There are two functions used in computing B INLINEFORM0 and LEA: the set size function INLINEFORM1 and the link function INLINEFORM2 . Because both of them are non-differentiable, the two metrics are non-differentiable. We thus need to make these two functions differentiable.",
"There are two remarks. Firstly, both functions can be computed using the indicator function INLINEFORM0 : INLINEFORM1 ",
" Secondly, given INLINEFORM0 , the indicator function INLINEFORM1 , INLINEFORM2 is the converging point of the following softmax as INLINEFORM3 (see Figure FIGREF19 ): INLINEFORM4 ",
"where INLINEFORM0 is called temperature BIBREF14 .",
"Therefore, we propose to represent each INLINEFORM0 as a soft-cluster: INLINEFORM1 ",
"where, as defined in Section SECREF3 , INLINEFORM0 is the potential entity that has INLINEFORM1 as the first mention. Replacing the indicator function INLINEFORM2 by the probability distribution INLINEFORM3 , we then have a differentiable version for the set size function and the link function: INLINEFORM4 ",
" INLINEFORM0 and INLINEFORM1 are computed similarly with the constraint that only mentions in INLINEFORM2 are taken into account. Plugging these functions into precision and recall of B INLINEFORM3 and LEA in Section SECREF6 , we obtain differentiable INLINEFORM4 and INLINEFORM5 , which are then used in two loss functions: INLINEFORM6 ",
" where INLINEFORM0 is the hyper-parameter of the INLINEFORM1 regularization terms.",
"It is worth noting that, as INLINEFORM0 , INLINEFORM1 and INLINEFORM2 . Therefore, when training a model with the proposed losses, we can start at a high temperature (e.g., INLINEFORM3 ) and anneal to a small but non-zero temperature. However, in our experiments we fix INLINEFORM4 . Annealing is left for future work."
],
[
"We now demonstrate how to use the proposed differentiable B INLINEFORM0 and LEA to train a coreference resolver. The source code and trained models are available at https://github.com/lephong/diffmetric_coref."
],
[
"We run experiments on the English portion of CoNLL 2012 data BIBREF15 which consists of 3,492 documents in various domains and formats. The split provided in the CoNLL 2012 shared task is used. In all our resolvers, we use not the original features of P15-1137 but their slight modification described in N16-1114 (section 6.1)."
],
[
"We build following baseline and three resolvers:",
"baseline: the resolver presented in Section SECREF2 . We use the identical configuration as in N16-1114: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 (where INLINEFORM3 are respectively the numbers of mention features and pair-wise features). We also employ their pretraining methodology.",
" INLINEFORM0 : the resolver using the entity-centric cross entropy loss introduced in Section SECREF18 . We set INLINEFORM1 .",
" INLINEFORM0 and INLINEFORM1 : the resolvers using the losses proposed in Section SECREF4 . INLINEFORM2 is tuned on the development set by trying each value in INLINEFORM3 .",
"To train these resolvers we use AdaGrad BIBREF16 to minimize their loss functions with the learning rate tuned on the development set and with one-document mini-batches. Note that we use the baseline as the initialization point to train the other three resolvers."
],
[
"We firstly compare our resolvers against P15-1137 and N16-1114. Results are shown in the first half of Table TABREF25 . Our baseline surpasses P15-1137. It is likely due to using features from N16-1114. Using the entity-centric heuristic cross entropy loss and the relaxations are clearly beneficial: INLINEFORM0 is slightly better than our baseline and on par with the global model of N16-1114. INLINEFORM1 outperform the baseline, the global model of N16-1114, and INLINEFORM2 . However, the best values of INLINEFORM3 are INLINEFORM4 , INLINEFORM5 respectively for INLINEFORM6 , and INLINEFORM7 . Among these resolvers, INLINEFORM8 achieves the highest F INLINEFORM9 scores across all the metrics except BLANC.",
"When comparing to clark-manning:2016:EMNLP2016 (the second half of Table TABREF25 ), we can see that the absolute improvement over the baselines (i.e. `heuristic loss' for them and the heuristic cross entropy loss for us) is higher than that of reward rescaling but with much shorter training time: INLINEFORM0 (7 days) and INLINEFORM1 (15 hours) on the CoNLL metric for clark-manning:2016:EMNLP2016 and ours, respectively. It is worth noting that our absolute scores are weaker than these of clark-manning:2016:EMNLP2016, as they build on top of a similar but stronger mention-ranking baseline, which employs deeper neural networks and requires a much larger number of epochs to train (300 epochs, including pretraining). For the purpose of illustrating the proposed losses, we started with a simpler model by P15-1137 which requires a much smaller number of epochs, thus faster, to train (20 epochs, including pretraining)."
],
[
"Table TABREF28 shows the breakdown of errors made by the baseline and our resolvers on the development set. The proposed resolvers make fewer “false anaphor” and “wrong link” errors but more “false new” errors compared to the baseline. This suggests that loss optimization prevents over-clustering, driving the precision up: when antecedents are difficult to detect, the self-link (i.e., INLINEFORM0 ) is chosen. When INLINEFORM1 increases, they make more “false anaphor” and “wrong link” errors but less “false new” errors.",
"In Figure FIGREF29 (a) the baseline, but not INLINEFORM0 nor INLINEFORM1 , mistakenly links INLINEFORM2 [it] with INLINEFORM3 [the virus]. Under-clustering, on the other hand, is a problem for our resolvers with INLINEFORM4 : in example (b), INLINEFORM5 missed INLINEFORM6 [We]. This behaviour results in a reduced recall but the recall is not damaged severely, as we still obtain a better INLINEFORM7 score. We conjecture that this behaviour is a consequence of using the INLINEFORM8 score in the objective, and, if undesirable, F INLINEFORM9 with INLINEFORM10 can be used instead. For instance, also in Figure FIGREF29 , INLINEFORM11 correctly detects INLINEFORM12 [it] as non-anaphoric and links INLINEFORM13 [We] with INLINEFORM14 [our].",
"Figure FIGREF30 shows recall, precision, F INLINEFORM0 (average of MUC, B INLINEFORM1 , CEAF INLINEFORM2 ), on the development set when training with INLINEFORM3 and INLINEFORM4 . As expected, higher values of INLINEFORM5 yield lower precisions but higher recalls. In contrast, F INLINEFORM6 increases until reaching the highest point when INLINEFORM7 for INLINEFORM8 ( INLINEFORM9 for INLINEFORM10 ), it then decreases gradually."
],
[
"Because the resolvers are evaluated on F INLINEFORM0 score metrics, it should be that INLINEFORM1 and INLINEFORM2 perform the best with INLINEFORM3 . Figure FIGREF30 and Table TABREF25 however do not confirm that: INLINEFORM4 should be set with values a little bit larger than 1. There are two hypotheses. First, the statistical difference between the training set and the development set leads to the case that the optimal INLINEFORM5 on one set can be sub-optimal on the other set. Second, in our experiments we fix INLINEFORM6 , meaning that the relaxations might not be close to the true evaluation metrics enough. Our future work, to confirm/reject this, is to use annealing, i.e., gradually decreasing INLINEFORM7 down to (but larger than) 0.",
"Table TABREF25 shows that the difference between INLINEFORM0 and INLINEFORM1 in terms of accuracy is not substantial (although the latter is slightly better than the former). However, one should expect that INLINEFORM2 would outperform INLINEFORM3 on B INLINEFORM4 metric while it would be the other way around on LEA metric. It turns out that, B INLINEFORM5 and LEA behave quite similarly in non-extreme cases. We can see that in Figure 2, 4, 5, 6, 7 in moosavi-strube:2016:P16-1."
],
[
"Mention ranking and entity centricity are two main streams in the coreference resolution literature. Mention ranking BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 considers local and independent decisions when choosing a correct antecedent for a mention. This approach is computationally efficient and currently dominant with state-of-the-art performance BIBREF5 , BIBREF6 . P15-1137 propose to use simple neural networks to compute mention ranking scores and to use a heuristic loss to train the model. N16-1114 extend this by employing LSTMs to compute mention-chain representations which are then used to compute ranking scores. They call these representations global features. clark-manning:2016:EMNLP2016 build a similar resolver as in P15-1137 but much stronger thanks to deeper neural networks and “better mention detection, more effective, hyperparameters, and more epochs of training”. Furthermore, using reward rescaling they achieve the best performance in the literature on the English and Chinese portions of the CoNLL 2012 dataset. Our work is built upon mention ranking by turning a mention-ranking model into an entity-centric one. It is worth noting that although we use the model proposed by P15-1137, any mention-ranking models can be employed.",
"Entity centricity BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , on the other hand, incorporates entity-level information to solve the problem. The approach can be top-down as in haghighi2010coreference where they propose a generative model. It can also be bottom-up by merging smaller clusters into bigger ones as in clark-manning:2016:P16-1. The method proposed by ma-EtAl:2014:EMNLP2014 greedily and incrementally adds mentions to previously built clusters using a prune-and-score technique. Importantly, employing imitation learning these two methods can optimize the resolvers directly on evaluation metrics. Our work is similar to ma-EtAl:2014:EMNLP2014 in the sense that our resolvers incrementally add mentions to previously built clusters. However, different from both ma-EtAl:2014:EMNLP2014,clark-manning:2016:P16-1, our resolvers do not use any discrete decisions (e.g., merge operations). Instead, they seamlessly compute the probability that a mention refers to an entity from mention-ranking probabilities, and are optimized on differentiable relaxations of evaluation metrics.",
"Using differentiable relaxations of evaluation metrics as in our work is related to a line of research in reinforcement learning where a non-differentiable action-value function is replaced by a differentiable critic BIBREF26 , BIBREF27 . The critic is trained so that it is as close to the true action-value function as possible. This technique is applied to machine translation BIBREF28 where evaluation metrics (e.g., BLUE) are non-differentiable. A disadvantage of using critics is that there is no guarantee that the critic converges to the true evaluation metric given finite training data. In contrast, our differentiable relaxations do not need to train, and the convergence is guaranteed as INLINEFORM0 ."
],
[
"We have proposed",
"Experimental results show that our approach outperforms the resolver by N16-1114, and gains a higher improvement over the baseline than that of clark-manning:2016:EMNLP2016 but with much shorter training time."
],
[
"We would like to thank Raquel Fernández, Wilker Aziz, Nafise Sadat Moosavi, and anonymous reviewers for their suggestions and comments. The project was supported by the European Research Council (ERC StG BroadSem 678254), the Dutch National Science Foundation (NWO VIDI 639.022.518) and an Amazon Web Services (AWS) grant."
]
],
"section_name": [
"Introduction",
"Neural mention ranking",
"Evaluation Metrics",
"From mention ranking to entity centricity",
"Entity-centric heuristic cross entropy loss",
"From non-differentiable metrics to differentiable losses",
"Experiments",
"Setup",
"Resolvers",
"Results",
"Analysis",
"Discussion",
"Related work",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"128eef39246d7cb10ef32e186e28896d3ef17f97",
"494ff2d2ce1a57872c4f4077617e51190ee4cf75"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Results (F1) on CoNLL 2012 test set. CoNLL is the average of MUC, B3, and CEAFe."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Results (F1) on CoNLL 2012 test set. CoNLL is the average of MUC, B3, and CEAFe."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Results (F1) on CoNLL 2012 test set. CoNLL is the average of MUC, B3, and CEAFe."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Results (F1) on CoNLL 2012 test set. CoNLL is the average of MUC, B3, and CEAFe."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a64fd4bde98f06b86265d769ece0f7532e9c299b",
"b72614968c6e5211ada486bf45d0f6b3b8980856"
],
"answer": [
{
"evidence": [
"We run experiments on the English portion of CoNLL 2012 data BIBREF15 which consists of 3,492 documents in various domains and formats. The split provided in the CoNLL 2012 shared task is used. In all our resolvers, we use not the original features of P15-1137 but their slight modification described in N16-1114 (section 6.1)."
],
"extractive_spans": [
"3,492 documents"
],
"free_form_answer": "",
"highlighted_evidence": [
"We run experiments on the English portion of CoNLL 2012 data BIBREF15 which consists of 3,492 documents in various domains and formats."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We run experiments on the English portion of CoNLL 2012 data BIBREF15 which consists of 3,492 documents in various domains and formats. The split provided in the CoNLL 2012 shared task is used. In all our resolvers, we use not the original features of P15-1137 but their slight modification described in N16-1114 (section 6.1)."
],
"extractive_spans": [],
"free_form_answer": "3492",
"highlighted_evidence": [
"We run experiments on the English portion of CoNLL 2012 data BIBREF15 which consists of 3,492 documents in various domains and formats."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0322bdc694eba9fe8e9fdab04e13e773d4ce4ee5",
"abe7d32d79e403047ae434f7bf54f421406c6dd7"
],
"answer": [
{
"evidence": [
"We run experiments on the English portion of CoNLL 2012 data BIBREF15 which consists of 3,492 documents in various domains and formats. The split provided in the CoNLL 2012 shared task is used. In all our resolvers, we use not the original features of P15-1137 but their slight modification described in N16-1114 (section 6.1)."
],
"extractive_spans": [
"CoNLL 2012"
],
"free_form_answer": "",
"highlighted_evidence": [
"We run experiments on the English portion of CoNLL 2012 data BIBREF15 which consists of 3,492 documents in various domains and formats."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We run experiments on the English portion of CoNLL 2012 data BIBREF15 which consists of 3,492 documents in various domains and formats. The split provided in the CoNLL 2012 shared task is used. In all our resolvers, we use not the original features of P15-1137 but their slight modification described in N16-1114 (section 6.1)."
],
"extractive_spans": [
"English portion of CoNLL 2012 data BIBREF15"
],
"free_form_answer": "",
"highlighted_evidence": [
"We run experiments on the English portion of CoNLL 2012 data BIBREF15 which consists of 3,492 documents in various domains and formats."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they compare against Reinforment-Learning approaches?",
"How long is the training dataset?",
"What dataset do they use?"
],
"question_id": [
"179bc57b7b5231ea6ad3e93993a6935dda679fa2",
"a59e86a15405c8a11890db072b99fda3173e5ab2",
"9489b0ecb643c1fc95c001c65d4e9771315989aa"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: For each mention mu there is a potential entity Eu so that mu is the first mention in the chain. Computing p(mi ∈ Eu), u < i takes into the account all directed paths from mi to Eu (black arrows). Noting that there is no directed path from any mk, k < u to Eu because p(mk ∈ Eu) = 0. (See text for more details.)",
"Figure 2: Softmax exp{πi/T}∑ j exp{πj/T} with different values of T . The softmax becomes more peaky when the value of T gets smaller. As T → 0 the softmax converges to the indicator function that chooses arg maxi πi.",
"Figure 3: Example predictions: the subscript before a mention is its index. The superscript / subscript after a mention indicates the antecedent predicted by the baseline / Lβ=1,B3 , Lβ= √ 1.4,B3 . Mentions with the same color are true coreferents. “*”s mark incorrect decisions.",
"Table 1: Results (F1) on CoNLL 2012 test set. CoNLL is the average of MUC, B3, and CEAFe.",
"Table 2: Number of: “false anaphor” (FA, a non-anaphoric mention marked as anaphoric), “false new” (FN, an anaphoric mention marked as non-anaphoric), and “wrong link” (WL, an anaphoric mention is linked to a wrong antecedent) errors on the development set.",
"Figure 4: Recall, precision, F1 (average of MUC, B3, CEAFe), on the development set when training with Lβ,B3 (left) and Lβ,LEA (right). Higher values of β yield lower precisions but higher recalls."
],
"file": [
"3-Figure1-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png",
"7-Table1-1.png",
"7-Table2-1.png",
"8-Figure4-1.png"
]
} | [
"How long is the training dataset?"
] | [
[
"1704.04451-Setup-0"
]
] | [
"3492"
] | 223 |
1908.11053 | Leveraging Frequent Query Substructures to Generate Formal Queries for Complex Question Answering | Formal query generation aims to generate correct executable queries for question answering over knowledge bases (KBs), given entity and relation linking results. Current approaches build universal paraphrasing or ranking models for the whole questions, which are likely to fail in generating queries for complex, long-tail questions. In this paper, we propose SubQG, a new query generation approach based on frequent query substructures, which helps rank the existing (but nonsignificant) query structures or build new query structures. Our experiments on two benchmark datasets show that our approach significantly outperforms the existing ones, especially for complex questions. Also, it achieves promising performance with limited training data and noisy entity/relation linking results. | {
"paragraphs": [
[
"Knowledge-based question answering (KBQA) aims to answer natural language questions over knowledge bases (KBs) such as DBpedia and Freebase. Formal query generation is an important component in many KBQA systems BIBREF0 , BIBREF1 , BIBREF2 , especially for answering complex questions. Given entity and relation linking results, formal query generation aims to generate correct executable queries, e.g., SPARQL queries, for the input natural language questions. An example question and its formal query are shown in Figure FIGREF1 . Generally speaking, formal query generation is expected to include but not be limited to have the capabilities of (i) recognizing and paraphrasing different kinds of constraints, including triple-level constraints (e.g., “movies\" corresponds to a typing constraint for the target variable) and higher level constraints (e.g., subgraphs). For instance, “the same ... as\" represents a complex structure shown in the middle of Figure FIGREF1 ; (ii) recognizing and paraphrasing aggregations (e.g., “how many\" corresponds to Count); and (iii) organizing all the above to generate an executable query BIBREF3 , BIBREF4 .",
"There are mainly two kinds of query generation approaches for complex questions. (i) Template-based approaches choose a pre-collected template for query generation BIBREF1 , BIBREF5 . Such approaches highly rely on the coverage of templates, and perform unstably when some complex templates have very few natural language questions as training data. (ii) Approaches based on semantic parsing and neural networks learn entire representations for questions with different query structures, by using a neural network following the encode-and-compare framework BIBREF2 , BIBREF4 . They may suffer from the lack of training data, especially for long-tail questions with rarely appeared structures. Furthermore, both above approaches cannot handle questions with unseen query structures, since they cannot generate new query structures.",
"To cope with the above limitations, we propose a new query generation approach based on the following observation: the query structure for a complex question may rarely appear, but it usually contains some substructures that frequently appeared in other questions. For example, the query structure for the question in Figure FIGREF1 appears rarely, however, both “how many movies\" and “the same ... as\" are common expressions, which correspond to the two query substructures in dashed boxes. To collect such frequently appeared substructures, we automatically decompose query structures in the training data. Instead of directly modeling the query structure for the given question as a whole, we employ multiple neural networks to predict query substructures contained in the question, each of which delivers a part of the query intention. Then, we select an existing query structure for the input question by using a combinational ranking function. Also, in some cases, no existing query structure is appropriate for the input question. To cope with this issue, we merge query substructures to build new query structures. The contributions of this paper are summarized below:"
],
[
"An entity is typically denoted by a URI and described with a set of properties and values. A fact is an INLINEFORM0 triple, where the value can be either a literal or another entity. A KB is a pair INLINEFORM1 , where INLINEFORM2 denotes the set of entities and INLINEFORM3 denotes the set of facts.",
"A formal query (or simply called query) is the structured representation of a natural language question executable on a given KB. Formally, a query is a pair INLINEFORM0 , where INLINEFORM1 denotes the set of vertices, and INLINEFORM2 denotes the set of labeled edges. A vertex can be either a variable, an entity or a literal, and the label of an edge can be either a built-in property or a user-defined one. For simplicity, the set of all edge labels are denoted by INLINEFORM3 . In this paper, the built-in properties include Count, Avg, Max, Min, MaxAtN, MinAtN and IsA (rdf:type), where the former four are used to connect two variables. For example, INLINEFORM4 represents that INLINEFORM5 is the counting result of INLINEFORM6 . MaxAtN and MinAtN take the meaning of Order By in SPARQL BIBREF0 . For instance, INLINEFORM7 means Order By Desc INLINEFORM8 Limit 1 Offset 1.",
"To classify various queries with similar query intentions and narrow the search space for query generation, we introduce the notion of query structures. A query structure is a set of structurally-equivalent queries. Let INLINEFORM0 and INLINEFORM1 denote two queries. INLINEFORM2 is structurally-equivalent to INLINEFORM3 , denoted by INLINEFORM4 , if and only if there exist two bijections INLINEFORM5 and INLINEFORM6 such that:",
"The query structure for INLINEFORM0 is denoted by INLINEFORM1 , which contains all the queries structurally-equivalent to INLINEFORM2 . For graphical illustration, we represent a query structure by a representative query among the structurally-equivalent ones and replace entities and literals with different kinds of placeholders. An example of query and query structure is shown in the upper half of Figure FIGREF9 .",
"For many simple questions, two query structures, i.e., INLINEFORM0 INLINEFORM1 and INLINEFORM2 INLINEFORM3 , are sufficient. However, for complex questions, a diversity of query structures exist and some of them share a set of frequently-appeared substructures, each of which delivers a part of the query intention. We give the definition of query substructures as follows.",
"Let INLINEFORM0 and INLINEFORM1 denote two query structures. INLINEFORM2 is a query substructure of INLINEFORM3 , denoted by INLINEFORM4 , if and only if INLINEFORM5 has a subgraph INLINEFORM6 such that INLINEFORM7 . Furthermore, if INLINEFORM8 , we say that INLINEFORM9 has INLINEFORM10 , and INLINEFORM11 is contained in INLINEFORM12 .",
"For example, although the query structures for the two questions in Figures FIGREF1 and FIGREF9 are different, they share the same query substructure INLINEFORM0 INLINEFORM1 INLINEFORM2 , which corresponds to the phrase “how many movies\". Note that, a query substructure can be the query structure of another question.",
"The goal of this paper is to leverage a set of frequent query (sub-)structures to generate formal queries for answering complex questions."
],
[
"In this section, we present our approach, SubQG, for query generation. We first introduce the framework and general steps with a running example (Section SECREF10 ), and then describe some important steps in detail in the following subsections."
],
[
"Figure FIGREF11 depicts the framework of SubQG, which contains an offline training process and an online query generation process.",
"Offline. The offline process takes as input a set of training data in form of INLINEFORM0 pairs, and mainly contains three steps:",
"1. Collect query structures. For questions in the training data, we first discover the structurally-equivalent queries, and then extract the set of all query structures, denoted by INLINEFORM0 .",
"2. Collect frequent query substructures. We decompose each query structure INLINEFORM0 to get the set for all query substructures. Let INLINEFORM1 be a non-empty subset of INLINEFORM2 , and INLINEFORM3 be the set of vertices used in INLINEFORM4 . INLINEFORM5 should be a query substructure of INLINEFORM6 according to the definition. So, we can generate all query substructures of INLINEFORM7 from each subset of INLINEFORM8 . Disconnected query substructures would be ignored since they express discontinuous meanings and should be split into smaller query substructures. If more than INLINEFORM9 queries in training data have substructure INLINEFORM10 , we consider INLINEFORM11 as a frequent query substructure. The set for all frequent query substructures is denoted by INLINEFORM12 .",
"3. Train query substructure predictors. We train a neural network for each query substructure INLINEFORM0 , to predict the probability that INLINEFORM1 has INLINEFORM2 (i.e., INLINEFORM3 ) for input question INLINEFORM4 , where INLINEFORM5 denotes the formal query for INLINEFORM6 . Details for this step are described in Section SECREF13 .",
"Online. The online query generation process takes as input a natural language question INLINEFORM0 , and mainly contains four steps:",
"1. Predict query substructures. We first predict the probability that INLINEFORM0 for each INLINEFORM1 , using the query substructure predictors trained in the offline step. An example question and four query substructures with highest prediction probabilities are shown in the top of Figure FIGREF12 .",
"2. Rank existing query structures. To find an appropriate query structure for the input question, we rank existing query structures ( INLINEFORM0 ) by using a scoring function, see Section SECREF20 .",
"3. Merge query substructures. Consider the fact that the target query structure INLINEFORM0 may not appear in INLINEFORM1 (i.e., there is no query in the training data that is structurally-equivalent to INLINEFORM2 ), we design a method (described in Section SECREF22 ) to merge question-contained query substructures for building new query structures. The merged results are ranked using the same function as existing query structures. Several query structures (including the merged results and the existing query structures) for the example question are shown in the middle of Figure FIGREF12 .",
"4. Grounding and validation. We leverage the query structure ranking result, alongside with the entity/relation linking result from some existing black box systems BIBREF6 to generate executable formal query for the input question. For each query structure, we try all possible combinations of the linking results according to the descending order of the overall linking score, and perform validation including grammar check, domain/range check and empty query check. The first non-empty query passing all validations is considered as the output for SubQG. The grounding and validation results for the example question are shown in the bottom of Figure FIGREF12 ."
],
[
"In this step, we employ an attention based Bi-LSTM network BIBREF7 to predict INLINEFORM0 for each frequent query substructure INLINEFORM1 , where INLINEFORM2 represents the probability of INLINEFORM3 . There are mainly three reasons that we use a predictor for each query substructure instead of a multi-tag predictor for all query substructures: (i) a query substructure usually expresses part of the meaning of input question. Different query substructures may focus on different words or phrases, thus, each predictor should have its own attention matrix; (ii) multi-tag predictor may have a lower accuracy since each tag has unbalanced training data; (iii) single pre-trained query substructure predictor from one dataset can be directly reused on another without adjusting the network structure, however, the multi-tag predictor need to adjust the size of the output layer and retrain when the set of frequent query substructures changes.",
"The structure of the network is shown in Figure FIGREF14 . Before the input question is fed into the network, we replace all entity mentions with INLINEFORM0 Entity INLINEFORM1 using EARL BIBREF6 , to enhance the generalization ability. Given the question word sequence { INLINEFORM2 }, we first use a word embedding matrix to convert the original sequence into word vectors { INLINEFORM3 }, followed by a BiLSTM network to generate the context-sensitive representation { INLINEFORM4 } for each word, where DISPLAYFORM0 ",
"Then, the attention mechanism takes each INLINEFORM0 as input, and calculates a weight INLINEFORM1 for each INLINEFORM2 , which is formulated as follows: DISPLAYFORM0 ",
" where INLINEFORM0 , INLINEFORM1 and INLINEFORM2 . Next, we get the representation for the whole question INLINEFORM3 as the weighted sum of INLINEFORM4 : DISPLAYFORM0 ",
"The output of the network is a probability DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 .",
"The loss function minimized during training is the binary cross-entropy: DISPLAYFORM0 ",
"where INLINEFORM0 denotes the set of training data."
],
[
"In this step, we use a combinational function to score each query structure in the training data for the input question. Since the prediction result for each query substructure is independent, the score for query structure INLINEFORM0 is measured by joint probability, which is DISPLAYFORM0 ",
"Assume that INLINEFORM0 , INLINEFORM1 , we have INLINEFORM2 . Thus, INLINEFORM3 should be 1 in the ideal condition. On the other hand, INLINEFORM4 , INLINEFORM5 should be 0. Thus, we have INLINEFORM6 , and INLINEFORM7 , we have INLINEFORM8 ."
],
[
"We proposed a method, shown in Algorithm SECREF22 , to merge question-contained query substructures to build new query structures. In the initialization step, it selects some query substructures of high scores as candidates, since the query substructure may directly be the appropriate query structure for the input question. In each iteration, the method merges each question-contained substructures with existing candidates, and the merged results of high scores are used as candidates in the next iteration. The final output is the union of all the results from at most INLINEFORM0 iterations. [!t] Query substructure merging textit Question INLINEFORM1 , freq. query substructures INLINEFORM2 INLINEFORM3 INLINEFORM4 (*[f] INLINEFORM5 is maximum iterations) INLINEFORM6 to INLINEFORM7 INLINEFORM8 INLINEFORM9 INLINEFORM10 INLINEFORM11 INLINEFORM12 ",
"When merging different query substructures, we allow them to share some vertices of the same kind (variable, entity, etc.) or edge labels, except the variables which represent aggregation results. Thus, the merged result of two query substructures is a set of query structures instead of one. Also, the following restrictions are used to filter the merged results:",
"The merged results should be connected;",
"The merged results have INLINEFORM0 triples;",
"The merged results have INLINEFORM0 aggregations;",
"An example for merging two query substructures is shown in Figure FIGREF26 ."
],
[
"In this section, we introduce the query generation datasets and state-of-the-art systems that we compare. We first show the end-to-end results of the query generation task, and then perform detailed analysis to show the effectiveness of each module. Question sets, source code and experimental results are available online."
],
[
"We employed the same datasets as BIBREF3 ( BIBREF3 ) and BIBREF4 ( BIBREF4 ): (i) the large-scale complex question answering dataset (LC-QuAD) BIBREF8 , containing 3,253 questions with non-empty results on DBpedia (2016-04), and (ii) the fifth edition of question answering over linked data (QALD-5) dataset BIBREF9 , containing 311 questions with non-empty results on DBpedia (2015-10). Both datasets are widely used in KBQA studies BIBREF10 , BIBREF6 , and have become benchmarks for some annual KBQA competitions. We did not employ the WebQuestions BIBREF11 dataset, since approximately 85% of its questions are simple. Also, we did not employ the ComplexQuestions BIBREF0 and ComplexWebQuestions BIBREF12 dataset, since the existing works on these datasets have not reported the formal query generation result, and it is difficult to separate the formal query generation component from the end-to-end KBQA systems in these works.",
"All the experiments were carried out on a machine with an Intel Xeon E3-1225 3.2GHz processor, 32 GB of RAM, and an NVIDIA GTX1080Ti GPU. For the embedding layer, we used random embedding. For each dataset, we performed 5-fold cross-validation with the train set (70%), development set (10%), and test set (20%). The threshold INLINEFORM0 for frequent query substructures is set to 30, the maximum iteration number INLINEFORM1 for merging is set to 2, INLINEFORM2 in Algorithm SECREF22 is set to INLINEFORM3 , the maximum triple number INLINEFORM4 for merged results is set to 5, and the maximum aggregation number INLINEFORM5 is set to 2. Other detailed statistics are shown in Table TABREF33 ."
],
[
"We compared SubQG with several existing approaches. SINA BIBREF13 and NLIWOD conduct query generation by predefined rules and existing templates. SQG BIBREF4 firstly generates candidate queries by finding valid walks containing all of entities and properties mentioned in questions, and then ranks them based on Tree-LSTM similarity. CompQA BIBREF2 is a KBQA system which achieved state-of-the-art performance on WebQuesions and ComplexQuestions over Freebase. We re-implemented its query generation component for DBpedia, which generates candidate queries by staged query generation, and ranks them using an encode-and-compare network.",
"The average F1-scores for the end-to-end query generation task are reported in Table TABREF35 . All these results are based on the gold standard entity/relation linking result as input. Our approach SubQG outperformed all the comparative approaches on both datasets. Furthermore, as the results shown in Table TABREF36 , it gained a more significant improvement on complex questions compared with CompQA.",
"https://github.com/dice-group/NLIWOD",
"Both SINA and NLIWOD did not employ a query ranking mechanism, i.e., their accuracy and coverage are limited by the rules and templates. Although both CompQA and SQG have a strong ability of generating candidate queries, they perform not quite well in query ranking. According to our observation, the main reason is that these approaches tried to learn entire representations for questions with different query structures (from simple to complex) using a single network, thus, they may suffer from the lack of training data, especially for the questions with rarely appeared structures. As a contrast, our approach leveraged multiple networks to learn predictors for different query substructures, and ranked query structures using combinational function, which gained a better performance.",
"The results on QALD-5 dataset is not as high as the result on LC-QuAD. This is because QALD-5 contains 11% of very difficult questions, requiring complex filtering conditions such as Regex and numerical comparison. These questions are currently beyond our approach's ability. Also, the size of training data is significant smaller."
],
[
"We compared the following settings of SubQG:",
"Rank w/o substructures. We replaced the query substructure prediction and query structure ranking module, by choosing an existing query structure in the training data for the input question, using a BiLSTM multiple classification network.",
"Rank w/ substructures We removed the merging module described in Section SECREF22 . This setting assumes that the appropriate query structure for an input question exists in the training data.",
"Merge query substructures This setting ignored existing query structures in the training data, and only considered the merged results of query substructures.",
"As the results shown in Table TABREF39 , the full version of SubQG achieved the best results on both datasets. Rank w/o substructures gained a comparatively low performance, especially when there is inadequate training data (on QALD-5). Compared with Rank w/ substructures, SubQG gained a further improvement, which indicates that the merging method successfully handled questions with unseen query structures.",
"Table TABREF40 shows the accuracy of some alternative networks for query substructure prediction (Section SECREF13 ). By removing the attention mechanism (replaced by unweighted average), the accuracy declined approximately 3%. Adding additional part of speech tag sequence of the input question gained no significant improvement. We also tried to replace the attention based BiLSTM with the network in BIBREF14 , which encodes questions with a convolutional layer followed by a max pooling layer. This approach did not perform well since it cannot capture long-term dependencies.",
"We simulated the real KBQA environment by considering noisy entity/relation linking results. We firstly mixed the correct linking result for each mention with the top-5 candidates generated from EARL BIBREF6 , which is a joint entity/relation linking system with state-of-the-art performance on LC-QuAD. The result is shown in the second row of Table TABREF42 . Although the precision for first output declined 11.4%, in 85% cases we still can generate correct answer in top-5. This is because SubQG ranked query structures first and considered linking results in the last step. Many error linking results can be filtered out by the empty query check or domain/range check.",
"We also test the performance of our approach only using the EARL linking results. The performance dropped dramatically in comparison to the first two rows. The main reason is that, for 82.8% of the questions, EARL provided partially correct results. If we consider the remaining questions, our system again have 73.2% and 84.8% of correctly-generated queries in top-1 and top-5 output, respectively.",
"We tested the performance of SubQG with different sizes of training data. The results on LC-QuAD dataset are shown in Figure FIGREF44 . With more training data, our query substructure based approaches obtained stable improvements on both precision and recall. Although the merging module impaired the overall precision a little bit, it shows a bigger improvement on recall, especially when there is very few training data. Generally speaking, equipped with the merging module, our substructure based query generation approach showed the best performance.",
"We analyzed 100 randomly sampled questions that SubQG did not return correct answers. The major causes of errors are summarized as follows:",
"Query structure errors (71%) occurred due to multiple reasons. Firstly, 21% of error cases have entity mentions that are not correctly detected before query substructure prediction, which highly influenced the prediction result. Secondly, in 39% of the cases a part of substructure predictors provided wrong prediction, which led to wrong structure ranking results. Finally, in the remaining 11% of the cases the correct query structure did not appear in the training data, and they cannot be generated by merging substructures.",
"Grounding errors (29%) occurred when SubQG generated wrong queries with correct query structures. For example, for the question “Was Kevin Rudd the prime minister of Julia Gillard\", SubQG cannot distinguish INLINEFORM0 from INLINEFORM1 INLINEFORM2 , since both triples exist in DBpedia. We believe that extra training data are required for fixing this problem."
],
[
"Alongside with entity and relation linking, existing KBQA systems often leverage formal query generation for complex question answering BIBREF0 , BIBREF8 . Based on our investigation, the query generation approaches can be roughly divided into two kinds: template-based and semantic parsing-based.",
"Template-based approaches transform the input question into a formal query by employing pre-collected query templates. BIBREF1 ( BIBREF1 ) collect different natural language expressions for the same query intention from question-answer pairs. BIBREF3 ( BIBREF3 ) re-implement and evaluate the query generation module in NLIWOD, which selects an existing template by some simple features such as the number of entities and relations in the input question. Recently, several query decomposition methods are studied to enlarge the coverage of the templates. BIBREF5 ( BIBREF5 ) present a KBQA system named QUINT, which collects query templates for specific dependency structures from question-answer pairs. Furthermore, it rewrites the dependency parsing results for questions with conjunctions, and then performs sub-question answering and answer stitching. BIBREF15 ( BIBREF15 ) decompose questions by using a huge number of triple-level templates extracted by distant supervision. Compared with these approaches, our approach predicts all kinds of query substructures (usually 1 to 4 triples) contained in the question, making full use of the training data. Also, our merging method can handle questions with unseen query structures, having a larger coverage and a more stable performance.",
"Semantic parsing-based approaches translate questions into formal queries using bottom up parsing BIBREF11 or staged query graph generation BIBREF14 . gAnswer BIBREF10 , BIBREF16 builds up semantic query graph for question analysis and utilize subgraph matching for disambiguation. Recent studies combine parsing based approaches with neural networks, to enhance the ability for structure disambiguation. BIBREF0 ( BIBREF0 ), BIBREF2 ( BIBREF2 ) and BIBREF4 ( BIBREF4 ) build query graphs by staged query generation, and follow an encode-and-compare framework to rank candidate queries with neural networks. These approaches try to learn entire representations for questions with different query structures by using a single network. Thus, they may suffer from the lack of training data, especially for questions with rarely appeared structures. By contrast, our approach utilizes multiple networks to learn predictors for different query substructures, which can gain a stable performance with limited training data. Also, our approach does not require manually-written rules, and performs stably with noisy linking results."
],
[
"In this paper, we introduced SubQG, a formal query generation approach based on frequent query substructures. SubQG firstly utilizes multiple neural networks to predict query substructures contained in the question, and then ranks existing query structures using a combinational function. Moreover, SubQG merges query substructures to build new query structures for questions without appropriate query structures in the training data. Our experiments showed that SubQG achieved superior results than the existing approaches, especially for complex questions.",
"In future work, we plan to add support for other complex questions whose queries require Union, Group By, or numerical comparison. Also, we are interested in mining natural language expressions for each query substructures, which may help current parsing approaches."
],
[
"This work is supported by the National Natural Science Foundation of China (Nos. 61772264 and 61872172). We would like to thank Yao Zhao for his help in preparing evaluation."
]
],
"section_name": [
"Introduction",
"Preliminaries",
"The Proposed Approach",
"Framework",
"Query Substructure Prediction",
"Query Structure Ranking",
"Query Substructure Merging",
"Experiments and Results",
"Experimental Setup",
"End-to-End Results",
"Detailed Analysis",
"Related Work",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"036f5a72a8a8e7c5b45c685e5737165946645c95",
"9cffe24ec8511d04917f6a80bf9a4a05f7633cd5"
],
"answer": [
{
"evidence": [
"The average F1-scores for the end-to-end query generation task are reported in Table TABREF35 . All these results are based on the gold standard entity/relation linking result as input. Our approach SubQG outperformed all the comparative approaches on both datasets. Furthermore, as the results shown in Table TABREF36 , it gained a more significant improvement on complex questions compared with CompQA.",
"Table TABREF40 shows the accuracy of some alternative networks for query substructure prediction (Section SECREF13 ). By removing the attention mechanism (replaced by unweighted average), the accuracy declined approximately 3%. Adding additional part of speech tag sequence of the input question gained no significant improvement. We also tried to replace the attention based BiLSTM with the network in BIBREF14 , which encodes questions with a convolutional layer followed by a max pooling layer. This approach did not perform well since it cannot capture long-term dependencies."
],
"extractive_spans": [
"average F1-score",
"accuracy"
],
"free_form_answer": "",
"highlighted_evidence": [
"The average F1-scores for the end-to-end query generation task are reported in Table TABREF35 .",
"Table TABREF40 shows the accuracy of some alternative networks for query substructure prediction (Section SECREF13 ). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The average F1-scores for the end-to-end query generation task are reported in Table TABREF35 . All these results are based on the gold standard entity/relation linking result as input. Our approach SubQG outperformed all the comparative approaches on both datasets. Furthermore, as the results shown in Table TABREF36 , it gained a more significant improvement on complex questions compared with CompQA."
],
"extractive_spans": [
"average F1-scores"
],
"free_form_answer": "",
"highlighted_evidence": [
"The average F1-scores for the end-to-end query generation task are reported in Table TABREF35 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"38059dab9803633cf34f7d31370bc73ee9b11f5d",
"bb28ba6e66f365a17d9e6e854840470055b80766"
],
"answer": [
{
"evidence": [
"In this paper, we introduced SubQG, a formal query generation approach based on frequent query substructures. SubQG firstly utilizes multiple neural networks to predict query substructures contained in the question, and then ranks existing query structures using a combinational function. Moreover, SubQG merges query substructures to build new query structures for questions without appropriate query structures in the training data. Our experiments showed that SubQG achieved superior results than the existing approaches, especially for complex questions."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we introduced SubQG, a formal query generation approach based on frequent query substructures. SubQG firstly utilizes multiple neural networks to predict query substructures contained in the question, and then ranks existing query structures using a combinational function."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"248726c821748dbed78f2da334a5c7495ddb78b6",
"8f8762bd89358aff0cb56b72f2c6b83af379322c"
],
"answer": [
{
"evidence": [
"We employed the same datasets as BIBREF3 ( BIBREF3 ) and BIBREF4 ( BIBREF4 ): (i) the large-scale complex question answering dataset (LC-QuAD) BIBREF8 , containing 3,253 questions with non-empty results on DBpedia (2016-04), and (ii) the fifth edition of question answering over linked data (QALD-5) dataset BIBREF9 , containing 311 questions with non-empty results on DBpedia (2015-10). Both datasets are widely used in KBQA studies BIBREF10 , BIBREF6 , and have become benchmarks for some annual KBQA competitions. We did not employ the WebQuestions BIBREF11 dataset, since approximately 85% of its questions are simple. Also, we did not employ the ComplexQuestions BIBREF0 and ComplexWebQuestions BIBREF12 dataset, since the existing works on these datasets have not reported the formal query generation result, and it is difficult to separate the formal query generation component from the end-to-end KBQA systems in these works."
],
"extractive_spans": [
"DBpedia (2016-04)",
"DBpedia (2015-10)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We employed the same datasets as BIBREF3 ( BIBREF3 ) and BIBREF4 ( BIBREF4 ): (i) the large-scale complex question answering dataset (LC-QuAD) BIBREF8 , containing 3,253 questions with non-empty results on DBpedia (2016-04), and (ii) the fifth edition of question answering over linked data (QALD-5) dataset BIBREF9 , containing 311 questions with non-empty results on DBpedia (2015-10)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Knowledge-based question answering (KBQA) aims to answer natural language questions over knowledge bases (KBs) such as DBpedia and Freebase. Formal query generation is an important component in many KBQA systems BIBREF0 , BIBREF1 , BIBREF2 , especially for answering complex questions. Given entity and relation linking results, formal query generation aims to generate correct executable queries, e.g., SPARQL queries, for the input natural language questions. An example question and its formal query are shown in Figure FIGREF1 . Generally speaking, formal query generation is expected to include but not be limited to have the capabilities of (i) recognizing and paraphrasing different kinds of constraints, including triple-level constraints (e.g., “movies\" corresponds to a typing constraint for the target variable) and higher level constraints (e.g., subgraphs). For instance, “the same ... as\" represents a complex structure shown in the middle of Figure FIGREF1 ; (ii) recognizing and paraphrasing aggregations (e.g., “how many\" corresponds to Count); and (iii) organizing all the above to generate an executable query BIBREF3 , BIBREF4 .",
"We employed the same datasets as BIBREF3 ( BIBREF3 ) and BIBREF4 ( BIBREF4 ): (i) the large-scale complex question answering dataset (LC-QuAD) BIBREF8 , containing 3,253 questions with non-empty results on DBpedia (2016-04), and (ii) the fifth edition of question answering over linked data (QALD-5) dataset BIBREF9 , containing 311 questions with non-empty results on DBpedia (2015-10). Both datasets are widely used in KBQA studies BIBREF10 , BIBREF6 , and have become benchmarks for some annual KBQA competitions. We did not employ the WebQuestions BIBREF11 dataset, since approximately 85% of its questions are simple. Also, we did not employ the ComplexQuestions BIBREF0 and ComplexWebQuestions BIBREF12 dataset, since the existing works on these datasets have not reported the formal query generation result, and it is difficult to separate the formal query generation component from the end-to-end KBQA systems in these works."
],
"extractive_spans": [
"DBpedia"
],
"free_form_answer": "",
"highlighted_evidence": [
"Knowledge-based question answering (KBQA) aims to answer natural language questions over knowledge bases (KBs) such as DBpedia and Freebase. ",
"We employed the same datasets as BIBREF3 ( BIBREF3 ) and BIBREF4 ( BIBREF4 ): (i) the large-scale complex question answering dataset (LC-QuAD) BIBREF8 , containing 3,253 questions with non-empty results on DBpedia (2016-04), and (ii) the fifth edition of question answering over linked data (QALD-5) dataset BIBREF9 , containing 311 questions with non-empty results on DBpedia (2015-10)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"3cd610ed85fc08c1d77c7daccd9d172b7933c8f5",
"e4846d21cc5fb06156e54cd1ba3978873161df43"
],
"answer": [
{
"evidence": [
"We simulated the real KBQA environment by considering noisy entity/relation linking results. We firstly mixed the correct linking result for each mention with the top-5 candidates generated from EARL BIBREF6 , which is a joint entity/relation linking system with state-of-the-art performance on LC-QuAD. The result is shown in the second row of Table TABREF42 . Although the precision for first output declined 11.4%, in 85% cases we still can generate correct answer in top-5. This is because SubQG ranked query structures first and considered linking results in the last step. Many error linking results can be filtered out by the empty query check or domain/range check."
],
"extractive_spans": [],
"free_form_answer": "by filtering errors in noisy entity linking by the empty query check or domain/range check in query structure ranking",
"highlighted_evidence": [
"We simulated the real KBQA environment by considering noisy entity/relation linking results. We firstly mixed the correct linking result for each mention with the top-5 candidates generated from EARL BIBREF6 , which is a joint entity/relation linking system with state-of-the-art performance on LC-QuAD. The result is shown in the second row of Table TABREF42 . Although the precision for first output declined 11.4%, in 85% cases we still can generate correct answer in top-5. This is because SubQG ranked query structures first and considered linking results in the last step. Many error linking results can be filtered out by the empty query check or domain/range check."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We simulated the real KBQA environment by considering noisy entity/relation linking results. We firstly mixed the correct linking result for each mention with the top-5 candidates generated from EARL BIBREF6 , which is a joint entity/relation linking system with state-of-the-art performance on LC-QuAD. The result is shown in the second row of Table TABREF42 . Although the precision for first output declined 11.4%, in 85% cases we still can generate correct answer in top-5. This is because SubQG ranked query structures first and considered linking results in the last step. Many error linking results can be filtered out by the empty query check or domain/range check."
],
"extractive_spans": [
"ranked query structures first and considered linking results in the last step",
"empty query check or domain/range check"
],
"free_form_answer": "",
"highlighted_evidence": [
"This is because SubQG ranked query structures first and considered linking results in the last step. Many error linking results can be filtered out by the empty query check or domain/range check."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"649a1a7ce161004cd9342c45defa58dd358f1a3c",
"76c1e2eeca86abffe307e2030d728482696cedfb"
],
"answer": [
{
"evidence": [
"We employed the same datasets as BIBREF3 ( BIBREF3 ) and BIBREF4 ( BIBREF4 ): (i) the large-scale complex question answering dataset (LC-QuAD) BIBREF8 , containing 3,253 questions with non-empty results on DBpedia (2016-04), and (ii) the fifth edition of question answering over linked data (QALD-5) dataset BIBREF9 , containing 311 questions with non-empty results on DBpedia (2015-10). Both datasets are widely used in KBQA studies BIBREF10 , BIBREF6 , and have become benchmarks for some annual KBQA competitions. We did not employ the WebQuestions BIBREF11 dataset, since approximately 85% of its questions are simple. Also, we did not employ the ComplexQuestions BIBREF0 and ComplexWebQuestions BIBREF12 dataset, since the existing works on these datasets have not reported the formal query generation result, and it is difficult to separate the formal query generation component from the end-to-end KBQA systems in these works."
],
"extractive_spans": [
"LC-QuAD",
"QALD-5"
],
"free_form_answer": "",
"highlighted_evidence": [
"We employed the same datasets as BIBREF3 ( BIBREF3 ) and BIBREF4 ( BIBREF4 ): (i) the large-scale complex question answering dataset (LC-QuAD) BIBREF8 , containing 3,253 questions with non-empty results on DBpedia (2016-04), and (ii) the fifth edition of question answering over linked data (QALD-5) dataset BIBREF9 , containing 311 questions with non-empty results on DBpedia (2015-10)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We employed the same datasets as BIBREF3 ( BIBREF3 ) and BIBREF4 ( BIBREF4 ): (i) the large-scale complex question answering dataset (LC-QuAD) BIBREF8 , containing 3,253 questions with non-empty results on DBpedia (2016-04), and (ii) the fifth edition of question answering over linked data (QALD-5) dataset BIBREF9 , containing 311 questions with non-empty results on DBpedia (2015-10). Both datasets are widely used in KBQA studies BIBREF10 , BIBREF6 , and have become benchmarks for some annual KBQA competitions. We did not employ the WebQuestions BIBREF11 dataset, since approximately 85% of its questions are simple. Also, we did not employ the ComplexQuestions BIBREF0 and ComplexWebQuestions BIBREF12 dataset, since the existing works on these datasets have not reported the formal query generation result, and it is difficult to separate the formal query generation component from the end-to-end KBQA systems in these works."
],
"extractive_spans": [
"(LC-QuAD) BIBREF8",
"(QALD-5) dataset BIBREF9"
],
"free_form_answer": "",
"highlighted_evidence": [
"We employed the same datasets as BIBREF3 ( BIBREF3 ) and BIBREF4 ( BIBREF4 ): (i) the large-scale complex question answering dataset (LC-QuAD) BIBREF8 , containing 3,253 questions with non-empty results on DBpedia (2016-04), and (ii) the fifth edition of question answering over linked data (QALD-5) dataset BIBREF9 , containing 311 questions with non-empty results on DBpedia (2015-10). Both datasets are widely used in KBQA studies BIBREF10 , BIBREF6 , and have become benchmarks for some annual KBQA competitions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What are their evaluation metrics?",
"Are their formal queries tree-structured?",
"What knowledge base do they rely on?",
"How do they recover from noisy entity linking?",
"What datasets do they evaluate on?"
],
"question_id": [
"17fd6deb9e10707f9d1b70165dedb045e1889aac",
"c4a3f270e942803dab9b40e5e871a2e8886ce444",
"1faccdc78bbd99320c160ac386012720a0552119",
"804466848f4fa1c552f0d971dce226cd18b9edda",
"8d683d2e1f46626ceab60ee4ab833b50b346c29e"
],
"question_writer": [
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668"
],
"search_query": [
"question answering",
"question answering",
"question answering",
"question answering",
"question answering"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: An example for complex question and query",
"Figure 2: Illustration of a query, a query structure and query substructures",
"Figure 4: An example for online query generation",
"Figure 3: Framework of the proposed approach",
"Figure 5: Attention-based BiLSTM network",
"Figure 6: Merge results for two query substructures",
"Table 1: Datasets and implementation details",
"Table 2: Average F1-scores of query generation",
"Table 6: Average Precision@k scores of query generation on LC-QuAD with noisy linking",
"Table 5: Accuracy of query substructure prediction",
"Figure 7: Precision, recall and F1-score with varied proportions of training data"
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"3-Figure4-1.png",
"3-Figure3-1.png",
"4-Figure5-1.png",
"5-Figure6-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"7-Table6-1.png",
"7-Table5-1.png",
"8-Figure7-1.png"
]
} | [
"How do they recover from noisy entity linking?"
] | [
[
"1908.11053-Detailed Analysis-6"
]
] | [
"by filtering errors in noisy entity linking by the empty query check or domain/range check in query structure ranking"
] | 225 |
1806.06571 | SubGram: Extending Skip-gram Word Representation with Substrings | Skip-gram (word2vec) is a recent method for creating vector representations of words ("distributed word representations") using a neural network. The representation gained popularity in various areas of natural language processing, because it seems to capture syntactic and semantic information about words without any explicit supervision in this respect. We propose SubGram, a refinement of the Skip-gram model to consider also the word structure during the training process, achieving large gains on the Skip-gram original test set. | {
"paragraphs": [
[
"Vector representations of words learned using neural networks (NN) have proven helpful in many algorithms for image annotation BIBREF0 or BIBREF1 , language modeling BIBREF2 , BIBREF3 and BIBREF4 or other natural language processing (NLP) tasks BIBREF5 or BIBREF6 .",
"Traditionally, every input word of an NN is stored in the “one-hot” representation, where the vector has only one element set to one and the rest of the vector are zeros. The size of the vector equals to the size of the vocabulary. The NN is trained to perform some prediction, e.g. to predict surrounding words given a word of interest. Instead of using this prediction capacity in some task, the practice is to extract the output of NN's hidden layer of each word (called distributed representation) and directly use this deterministic mapping INLINEFORM0 of word forms to the vectors of real numbers as the word representation.",
"The input one-hot representation of words has two weaknesses: the bloat of the size of the vector with more words in vocabulary and the inability to provide any explicit semantic or syntactic information to the NN.",
"The learned distributed representation of words relies on much shorter vectors (e.g. vocabularies containing millions words are represented in vectors of a few hundred elements) and semantic or syntactic information is often found to be implicitly present (“embedded”) in the vector space. For example, the Euclidean distance between two words in the vector space may be related to semantic or syntactic similarity between them."
],
[
"The authors of BIBREF7 created a model called Skip-gram, in which linear vector operations allow to find related words with surprisingly good results. For instance INLINEFORM0 gives a value close to INLINEFORM1 .",
"In this paper, we extend Skip-gram model with the internal word structure and show how it improves the performance on embedding morpho-syntactic information.",
"The Skip-gram model defined in BIBREF7 is trained to predict context words of the input word. Given a corpus INLINEFORM0 of words INLINEFORM1 and their context words INLINEFORM2 (i.e. individual words INLINEFORM3 appearing close the original word INLINEFORM4 ), it considers the conditional probabilities INLINEFORM5 . The training finds the parameters INLINEFORM6 of INLINEFORM7 to maximize the corpus probability: DISPLAYFORM0 ",
"The Skip-gram model is a classic NN, where activation functions are removed and hierarchical soft-max BIBREF8 is used instead of soft-max normalization. The input representation is one-hot so the activation function is not needed on hidden layer, there is nothing to be summed up. This way, the model is learned much faster than comparable non-linear NNs and lends itself to linear vector operations possibly useful for finding semantically or syntactically related words."
],
[
"In BIBREF9 was proposed to append part-of-speech (POS) tags to each word and train Skip-gram model on the new vocabulary. This avoided conflating, e.g. nouns and verbs, leading to a better performance, at the cost of (1) the reliance on POS tags and their accurate estimation and (2) the increased sparsity of the data due to the larger vocabulary.",
"The authors in BIBREF10 used character-level input to train language models using a complex setup of NNs of several types. Their model was able to assign meaning to out-of-vocabulary words based on the closest neighbor. One disadvantage of the model is its need to run the computation on a GPU for a long time.",
"The authors of BIBREF11 proposed an extension of Skip-gram model which uses character similarity of words to improve performance on syntactic and semantic tasks. They are using a set of similar words as additional features for the NN. Various similarity measures are tested: Levenshtein, longest common substring, morpheme and syllable similarity.",
"The authors of BIBREF12 added the information about word's root, affixes, syllables, synonyms, antonyms and POS tags to continuous bag-of-words model (CBOW) proposed by BIBREF7 and showed how these types of knowledge lead to better word embeddings. The CBOW model is a simpler model with usually worse performance than Skip-gram."
],
[
"We propose a substring-oriented extension of Skip-gram model which induces vector embeddings from character-level structure of individual words. This approach gives the NN more information about the examined word with no drawbacks in data sparsity or reliance on explicit linguistic annotation.",
"We append the characters and $ to the word to indicate its beginning and end. In order to generate the vector of substrings, we take all character bigrams, trigrams etc. up to the length of the word. This way, even the word itself is represented as one of the substrings. For the NN, each input word is then represented as a binary vector indicating which substrings appear in the word.",
"The original Skip-gram model BIBREF7 uses one-hot representation of a word in vocabulary as the input vector. This representation makes training fast because no summation or normalization is needed. The weights INLINEFORM0 of the input word INLINEFORM1 can be directly used as the output of hidden layer INLINEFORM2 (and as the distributed word representation): INLINEFORM3 ",
"In our approach, we provide the network with a binary vector representing all substrings of the word. To compute the input of hidden layer we decided to use mean value as it is computationally simpler than sigmoid: DISPLAYFORM0 ",
"where INLINEFORM0 is the number of substrings of the word INLINEFORM1 ."
],
[
"We train our NN on words and their contexts extracted from the English wikipedia dump from May 2015. We have cleaned the data by replacing all numbers with 0 and removing special characters except those usually present in the English text like dots, brackets, apostrophes etc. For the final training data we have randomly selected only 2.5M segments (mostly sentences). It consist of 96M running words with the vocabulary size of 1.09M distinct word forms.",
"We consider only the 141K most frequent word forms to simplify the training. The remaining word forms fall out of vocabulary (OOV), so the original Skip-gram cannot provide them with any vector representation. Our SubGram relies on known substrings and always provides at least some approximation.",
"We test our model on the original test set BIBREF7 . The test set consists of 19544 “questions”, of which 8869 are called “semantic” and 10675 are called “syntactic” and further divided into 14 types, see Table TABREF4 . Each question contains two pairs of words ( INLINEFORM0 ) and captures relations like “What is to `woman' ( INLINEFORM1 ) as `king' ( INLINEFORM2 ) is to `man' ( INLINEFORM3 )?”, together with the expected answer `queen' ( INLINEFORM4 ). The model is evaluated by finding the word whose representation is the nearest (cosine similarity) to the vector INLINEFORM5 . If the nearest neighbor is INLINEFORM6 , we consider the question answered correctly.",
"In this work, we use Mikolov's test set which is used in many papers. After a closer examination we came to the conclusion, that it does not test what the broad terms “syntactic” and “semantic relations” suggest. “Semantics” is covered by questions of only 3 types: predict a city based on a country or state, currency name from the country and the feminine variant of nouns denoting family relations. The authors of BIBREF13 showed, that many other semantic relationships could be tested, e.g. walk-run, dog-puppy, bark-dog, cook-eat and others.",
"“Syntactic” questions cover a wider range of relations at the boundary of morphology and syntax. The problem is that all questions of a given type are constructed from just a few dozens of word pairs, comparing pairs with each other. Overall, there are 313 distinct pairs throughout the whole syntactic test set of 10675 questions, which means only around 35 different pairs per question set. Moreover, of the 313 pairs, 286 pairs are regularly formed (e.g. by adding the suffix `ly' to change an adjective into the corresponding adverb). Though it has to be mentioned that original model could not use this kind of information.",
"We find such a small test set unreliable to answer the question whether the embedding captures semantic and syntactic properties of words."
],
[
"Although the original test set has been used to compare results in several papers, no-one tried to process it with some baseline approach. Therefore, we created a very simple set of rules for comparison on the syntactic part of the test set. The rules cover only the most frequent grammatical phenomenona.",
"[noitemsep]",
"adjective-to-adverb: Add ly at the end of the adjective.",
"opposite: Add un at the beginning of positive form.",
"comparative: If the adjective ends with y, replace it with ier. If it ends with e, add r. Otherwise add er at the end.",
"superlative: If the adjective ends with y, replace it with iest. If it ends with e, add st. Otherwise add est at the end.",
"present-participle: If the verb ends with e, replace it with ing, otherwise add ing at the end.",
"nationality-adjective: Add n at the end, e.g. Russia INLINEFORM0 Russian.",
"past-tense: Remove ing and add ed at the end of the verb.",
"plural: Add s at the end of the word.",
"plural-verbs: If the word ends with a vowel, add es at the end, else add s."
],
[
"We have decided to create more general test set which would consider more than 35 pairs per question set. Since we are interested in morphosyntactic relations, we extended only the questions of the “syntactic” type with exception of nationality adjectives which is already covered completely in original test set.",
"We constructed the pairs more or less manually, taking inspiration in the Czech side of the CzEng corpus BIBREF14 , where explicit morphological annotation allows to identify various pairs of Czech words (different grades of adjectives, words and their negations, etc.). The word-aligned English words often shared the same properties. Another sources of pairs were acquired from various webpages usually written for learners of English. For example for verb tense, we relied on a freely available list of English verbs and their morphological variations. We have included 100–1000 different pairs for each question set. The questions were constructed from the pairs similarly as by Mikolov: generating all possible pairs of pairs. This leads to millions of questions, so we randomly selected 1000 instances per question set, to keep the test set in the same order of magnitude. Additionally, we decided to extend set of questions on opposites to cover not only opposites of adjectives but also of nouns and verbs.",
"In order to test our extension of Skip-gram on out-of-vocabulary words, we created an additional subset of our test set with questions where at least one of INLINEFORM0 and INLINEFORM1 is not among the known word forms. Note that the last word INLINEFORM2 must be in vocabulary in order to check if the output vector is correct."
],
[
"We used a Python implementation of word2vec as the basis for our SubGram, which we have made freely available .",
"We limit the vocabulary, requiring each word form to appear at least 10 times in the corpus and each substring to appear at least 500 times in the corpus. This way, we get the 141K unique words mentioned above and 170K unique substrings (+141K words, as we downsample words separately).",
"Our word vectors have the size of 100. The size of the context window is 5.",
"The accuracy is computed as the number of correctly answered questions divided by the total number of questions in the set. Because the Skip-gram cannot answer questions containing OOV words, we also provide results with such questions excluded from the test set (scores in brackets).",
"Table TABREF18 and Table TABREF19 report the results. The first column shows the rule-based approach. The column “Released Skip-gram” shows results of the model released by Mikolov and was trained on a 100 billion word corpus from Google News and generates 300 dimensional vector representation. The third column shows Skip-gram model trained on our training data, the same data as used for the training of the SubGram. Last column shows the results obtained from our SubGram model.",
"Comparing Skip-gram and SubGram on the original test set (Table TABREF18 ), we see that our SubGram outperforms Skip-gram in several morpho-syntactic question sets but over all performs similarly (42.5% vs. 42.3%). On the other hand, it does not capture the tested semantic relations at all, getting a zero score on average.",
"When comparing models on our test set (Table TABREF19 ), we see that given the same training set, SubGram significantly outperforms Skip-gram model (22.4% vs. 9.7%). The performance of Skip-gram trained on the much larger dataset is higher (43.5%) and it would be interesting to see the SubGram model, if we could get access to such training data. Note however, that the Rule-based baseline is significantly better on both test sets.",
"The last column suggests that the performance of our model on OOV words is not very high, but it is still an improvement over flat zero of the Skip-gram model. The performance on OOVs is expected to be lower, since the model has no knowledge of exceptions and can only benefit from regularities in substrings."
],
[
"We are working on a better test set for word embeddings which would include many more relations over a larger vocabulary especially semantics relations. We want to extend the test set with Czech and perhaps other languages, to see what word embeddings can bring to languages morphologically richer than English.",
"As shown in the results, the rule based approach outperform NN approach on this type of task, therefore we would like to create a hybrid system which could use rules and part-of-speech tags. We will also include morphological tags in the model as proposed in BIBREF9 but without making the data sparse.",
"Finally, we plan to reimplement SubGram to scale up to larger training data."
],
[
"We described SubGram, an extension of the Skip-gram model that considers also substrings of input words. The learned embeddings then better capture almost all morpho-syntactic relations tested on test set which we extended from original described in BIBREF7 . This test set is released for the public use.",
"An useful feature of our model is the ability to generate vector embeddings even for unseen words. This could be exploited by NNs also in different tasks."
],
[
"This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 645452 (QT21), the grant GAUK 8502/2016, and SVV project number 260 333.",
"This work has been using language resources developed, stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071)."
]
],
"section_name": [
"Introduction",
"Skip-gram Model",
"Related Work",
"SubGram",
"Evaluation and Data Sets",
"Rule-Based Baseline Approach",
"Our Test Set",
"Experiments and Results",
"Future Work",
"Conclusion",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"037dcc306709aa8467042fa6f81373cddb751ac7",
"5f490249eaa219f2d4793fd25e9e3045cca3bbf3"
],
"answer": [
{
"evidence": [
"We train our NN on words and their contexts extracted from the English wikipedia dump from May 2015. We have cleaned the data by replacing all numbers with 0 and removing special characters except those usually present in the English text like dots, brackets, apostrophes etc. For the final training data we have randomly selected only 2.5M segments (mostly sentences). It consist of 96M running words with the vocabulary size of 1.09M distinct word forms.",
"We consider only the 141K most frequent word forms to simplify the training. The remaining word forms fall out of vocabulary (OOV), so the original Skip-gram cannot provide them with any vector representation. Our SubGram relies on known substrings and always provides at least some approximation."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We train our NN on words and their contexts extracted from the English wikipedia dump from May 2015. We have cleaned the data by replacing all numbers with 0 and removing special characters except those usually present in the English text like dots, brackets, apostrophes etc. For the final training data we have randomly selected only 2.5M segments (mostly sentences). It consist of 96M running words with the vocabulary size of 1.09M distinct word forms.\n\nWe consider only the 141K most frequent word forms to simplify the training. "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The authors of BIBREF7 created a model called Skip-gram, in which linear vector operations allow to find related words with surprisingly good results. For instance INLINEFORM0 gives a value close to INLINEFORM1 .",
"We train our NN on words and their contexts extracted from the English wikipedia dump from May 2015. We have cleaned the data by replacing all numbers with 0 and removing special characters except those usually present in the English text like dots, brackets, apostrophes etc. For the final training data we have randomly selected only 2.5M segments (mostly sentences). It consist of 96M running words with the vocabulary size of 1.09M distinct word forms.",
"Table TABREF18 and Table TABREF19 report the results. The first column shows the rule-based approach. The column “Released Skip-gram” shows results of the model released by Mikolov and was trained on a 100 billion word corpus from Google News and generates 300 dimensional vector representation. The third column shows Skip-gram model trained on our training data, the same data as used for the training of the SubGram. Last column shows the results obtained from our SubGram model."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The authors of BIBREF7 created a model called Skip-gram, in which linear vector operations allow to find related words with surprisingly good results.",
"We train our NN on words and their contexts extracted from the English wikipedia dump from May 2015.",
" The column “Released Skip-gram” shows results of the model released by Mikolov and was trained on a 100 billion word corpus from Google News and generates 300 dimensional vector representation."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"571b557309aac2f23ac00b81b8f935cf2a4fb437",
"5874a58c69caa8f05d5a2531160513a3b94878ef"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2. Results on original test set questions. The values in brackets are based on questions without any OOVs."
],
"extractive_spans": [],
"free_form_answer": "between 21-57% in several morpho-syntactic questions",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2. Results on original test set questions. The values in brackets are based on questions without any OOVs."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We test our model on the original test set BIBREF7 . The test set consists of 19544 “questions”, of which 8869 are called “semantic” and 10675 are called “syntactic” and further divided into 14 types, see Table TABREF4 . Each question contains two pairs of words ( INLINEFORM0 ) and captures relations like “What is to `woman' ( INLINEFORM1 ) as `king' ( INLINEFORM2 ) is to `man' ( INLINEFORM3 )?”, together with the expected answer `queen' ( INLINEFORM4 ). The model is evaluated by finding the word whose representation is the nearest (cosine similarity) to the vector INLINEFORM5 . If the nearest neighbor is INLINEFORM6 , we consider the question answered correctly.",
"We have decided to create more general test set which would consider more than 35 pairs per question set. Since we are interested in morphosyntactic relations, we extended only the questions of the “syntactic” type with exception of nationality adjectives which is already covered completely in original test set.",
"The accuracy is computed as the number of correctly answered questions divided by the total number of questions in the set. Because the Skip-gram cannot answer questions containing OOV words, we also provide results with such questions excluded from the test set (scores in brackets).",
"Table TABREF18 and Table TABREF19 report the results. The first column shows the rule-based approach. The column “Released Skip-gram” shows results of the model released by Mikolov and was trained on a 100 billion word corpus from Google News and generates 300 dimensional vector representation. The third column shows Skip-gram model trained on our training data, the same data as used for the training of the SubGram. Last column shows the results obtained from our SubGram model.",
"We consider only the 141K most frequent word forms to simplify the training. The remaining word forms fall out of vocabulary (OOV), so the original Skip-gram cannot provide them with any vector representation. Our SubGram relies on known substrings and always provides at least some approximation.",
"We propose a substring-oriented extension of Skip-gram model which induces vector embeddings from character-level structure of individual words. This approach gives the NN more information about the examined word with no drawbacks in data sparsity or reliance on explicit linguistic annotation.",
"Comparing Skip-gram and SubGram on the original test set (Table TABREF18 ), we see that our SubGram outperforms Skip-gram in several morpho-syntactic question sets but over all performs similarly (42.5% vs. 42.3%). On the other hand, it does not capture the tested semantic relations at all, getting a zero score on average."
],
"extractive_spans": [],
"free_form_answer": "Only 0.2% accuracy gain in morpho-sintactic questions in original test set, and 12.7% accuracy gain on their test set",
"highlighted_evidence": [
"We test our model on the original test set BIBREF7 . The test set consists of 19544 “questions”, of which 8869 are called “semantic” and 10675 are called “syntactic” and further divided into 14 types, see Table TABREF4 .",
"We have decided to create more general test set which would consider more than 35 pairs per question set. Since we are interested in morphosyntactic relations, we extended only the questions of the “syntactic” type with exception of nationality adjectives which is already covered completely in original test set.",
"The accuracy is computed as the number of correctly answered questions divided by the total number of questions in the set. ",
"Table TABREF18 and Table TABREF19 report the results.",
"The first column shows the rule-based approach. The column “Released Skip-gram” shows results of the model released by Mikolov and was trained on a 100 billion word corpus from Google News and generates 300 dimensional vector representation. The third column shows Skip-gram model trained on our training data, the same data as used for the training of the SubGram. Last column shows the results obtained from our SubGram model.",
"Our SubGram relies on known substrings and always provides at least some approximation.",
"We propose a substring-oriented extension of Skip-gram model which induces vector embeddings from character-level structure of individual words",
"Comparing Skip-gram and SubGram on the original test set (Table TABREF18 ), we see that our SubGram outperforms Skip-gram in several morpho-syntactic question sets but over all performs similarly (42.5% vs. 42.3%). On the other hand, it does not capture the tested semantic relations at all, getting a zero score on average."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"Did they use the same dataset as Skip-gram to train?",
"How much were the gains they obtained?"
],
"question_id": [
"5ae005917efc17a505ba1ba5e996c4266d6c74b6",
"72c04eb3fc323c720f7f8da75c70f09a35abf3e6"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Table 1. Mikolov’s test set question types, the upper part are “semantic” questions, the lower part are “syntactic”.",
"Table 2. Results on original test set questions. The values in brackets are based on questions without any OOVs.",
"Table 3. Results on our test set questions."
],
"file": [
"4-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png"
]
} | [
"How much were the gains they obtained?"
] | [
[
"1806.06571-6-Table2-1.png",
"1806.06571-Experiments and Results-4",
"1806.06571-Evaluation and Data Sets-2",
"1806.06571-Our Test Set-0",
"1806.06571-SubGram-0",
"1806.06571-Experiments and Results-3",
"1806.06571-Experiments and Results-5",
"1806.06571-Evaluation and Data Sets-1"
]
] | [
"Only 0.2% accuracy gain in morpho-sintactic questions in original test set, and 12.7% accuracy gain on their test set"
] | 226 |
1906.00424 | Plain English Summarization of Contracts | Unilateral contracts, such as terms of service, play a substantial role in modern digital life. However, few users read these documents before accepting the terms within, as they are too long and the language too complicated. We propose the task of summarizing such legal documents in plain English, which would enable users to have a better understanding of the terms they are accepting. We propose an initial dataset of legal text snippets paired with summaries written in plain English. We verify the quality of these summaries manually and show that they involve heavy abstraction, compression, and simplification. Initial experiments show that unsupervised extractive summarization methods do not perform well on this task due to the level of abstraction and style differences. We conclude with a call for resource and technique development for simplification and style transfer for legal language. | {
"paragraphs": [
[
"Although internet users accept unilateral contracts such as terms of service on a regular basis, it is well known that these users rarely read them. Nonetheless, these are binding contractual agreements. A recent study suggests that up to 98% of users do not fully read the terms of service before accepting them BIBREF0 . Additionally, they find that two of the top three factors users reported for not reading these documents were that they are perceived as too long (`information overload') and too complicated (`difficult to understand'). This can be seen in Table TABREF3 , where a section of the terms of service for a popular phone app includes a 78-word paragraph that can be distilled down to a 19-word summary.",
"The European Union's BIBREF1 , the United States' BIBREF2 , and New York State's BIBREF3 show that many levels of government have recognized the need to make legal information more accessible to non-legal communities. Additionally, due to recent social movements demanding accessible and transparent policies on the use of personal data on the internet BIBREF4 , multiple online communities have formed that are dedicated to manually annotating various unilateral contracts.",
"We propose the task of the automatic summarization of legal documents in plain English for a non-legal audience. We hope that such a technological advancement would enable a greater number of people to enter into everyday contracts with a better understanding of what they are agreeing to. Automatic summarization is often used to reduce information overload, especially in the news domain BIBREF5 . Summarization has been largely missing in the legal genre, with notable exceptions of judicial judgments BIBREF6 , BIBREF7 and case reports BIBREF8 , as well as information extraction on patents BIBREF9 , BIBREF10 . While some companies have conducted proprietary research in the summarization of contracts, this information sits behind a large pay-wall and is geared toward law professionals rather than the general public.",
"In an attempt to motivate advancement in this area, we have collected 446 sets of contract sections and corresponding reference summaries which can be used as a test set for such a task. We have compiled these sets from two websites dedicated to explaining complicated legal documents in plain English.",
"Rather than attempt to summarize an entire document, these sources summarize each document at the section level. In this way, the reader can reference the more detailed text if need be. The summaries in this dataset are reviewed for quality by the first author, who has 3 years of professional contract drafting experience.",
"The dataset we propose contains 446 sets of parallel text. We show the level of abstraction through the number of novel words in the reference summaries, which is significantly higher than the abstractive single-document summaries created for the shared tasks of the Document Understanding Conference (DUC) in 2002 BIBREF11 , a standard dataset used for single document news summarization. Additionally, we utilize several common readability metrics to show that there is an average of a 6 year reading level difference between the original documents and the reference summaries in our legal dataset.",
"In initial experimentation using this dataset, we employ popular unsupervised extractive summarization models such as TextRank BIBREF12 and Greedy KL BIBREF13 , as well as lead baselines. We show that such methods do not perform well on this dataset when compared to the same methods on DUC 2002. These results highlight the fact that this is a very challenging task. As there is not currently a dataset in this domain large enough for supervised methods, we suggest the use of methods developed for simplification and/or style transfer.",
"In this paper, we begin by discussing how this task relates to the current state of text summarization and similar tasks in Section SECREF2 . We then introduce the novel dataset and provide details on the level of abstraction, compression, and readability in Section SECREF3 . Next, we provide results and analysis on the performance of extractive summarization baselines on our data in Section SECREF5 . Finally, we discuss the potential for unsupervised systems in this genre in Section SECREF6 ."
],
[
"Given a document, the goal of single document summarization is to produce a shortened summary of the document that captures its main semantic content BIBREF5 . Existing research extends over several genres, including news BIBREF11 , BIBREF14 , BIBREF15 , scientific writing BIBREF16 , BIBREF17 , BIBREF18 , legal case reports BIBREF8 , etc. A critical factor in successful summarization research is the availability of a dataset with parallel document/human-summary pairs for system evaluation. However, no such publicly available resource for summarization of contracts exists to date. We present the first dataset in this genre. Note that unlike other genres where human summaries paired with original documents can be found at scale, e.g., the CNN/DailyMail dataset BIBREF14 , resources of this kind are yet to be curated/created for contracts. As traditional supervised summarization systems require these types of large datasets, the resources released here are intended for evaluation, rather than training. Additionally, as a first step, we restrict our initial experiments to unsupervised baselines which do not require training on large datasets.",
"The dataset we present summarizes contracts in plain English. While there is no precise definition of plain English, the general philosophy is to make a text readily accessible for as many English speakers as possible. BIBREF19 , BIBREF20 . Guidelines for plain English often suggest a preference for words with Saxon etymologies rather than a Latin/Romance etymologies, the use of short words, sentences, and paragraphs, etc. BIBREF20 , BIBREF21 . In this respect, the proposed task involves some level of text simplification, as we will discuss in Section SECREF16 . However, existing resources for text simplification target literacy/reading levels BIBREF22 or learners of English as a second language BIBREF23 . Additionally, these models are trained using Wikipedia or news articles, which are quite different from legal documents. These systems are trained without access to sentence-aligned parallel corpora; they only require semantically similar texts BIBREF24 , BIBREF25 , BIBREF26 . To the best of our knowledge, however, there is no existing dataset to facilitate the transfer of legal language to plain English."
],
[
"This section introduces a dataset compiled from two websites dedicated to explaining unilateral contracts in plain English: TL;DRLegal and TOS;DR. These websites clarify language within legal documents by providing summaries for specific sections of the original documents. The data was collected using Scrapy and a JSON interface provided by each website's API. Summaries are submitted and maintained by members of the website community; neither website requires community members to be law professionals."
],
[
"TL;DRLegal focuses mostly on software licenses, however, we only scraped documents related to specific companies rather than generic licenses (i.e. Creative Commons, etc). The scraped data consists of 84 sets sourced from 9 documents: Pokemon GO Terms of Service, TLDRLegal Terms of Service, Minecraft End User Licence Agreement, YouTube Terms of Service, Android SDK License Agreement (June 2014), Google Play Game Services (May 15th, 2013), Facebook Terms of Service (Statement of Rights and Responsibilities), Dropbox Terms of Service, and Apple Website Terms of Service.",
"Each set consists of a portion from the original agreement text and a summary written in plain English. Examples of the original text and the summary are shown in Table TABREF10 ."
],
[
"TOS;DR tends to focus on topics related to user data and privacy. We scraped 421 sets of parallel text sourced from 166 documents by 122 companies. Each set consists of a portion of an agreement text (e.g., Terms of Use, Privacy Policy, Terms of Service) and 1-3 human-written summaries.",
"While the multiple references can be useful for system development and evaluation, the qualities of these summaries varied greatly. Therefore, each text was examined by the first author, who has three years of professional experience in contract drafting for a software company. A total of 361 sets had at least one quality summary in the set. For each, the annotator selected the most informative summary to be used in this paper.",
"Of the 361 accepted summaries, more than two-thirds of them (152) are `templatic' summaries. A summary deemed templatic if it could be found in more than one summary set, either word-for-word or with just the service name changed. However, of the 152 templatic summaries which were selected as the best of their set, there were 111 unique summaries. This indicates that the templatic summaries which were selected for the final dataset are relatively unique.",
"A total of 369 summaries were outright rejected for a variety of reasons, including summaries that: were a repetition of another summary for the same source snippet (291), were an exact quote of the original text (63), included opinionated language that could not be inferred from the original text (24), or only described the topic of the quote but not the content (20). We also rejected any summaries that are longer than the original texts they summarize. Annotated examples from TOS;DR can be found in Table TABREF12 ."
],
[
"To understand the level of abstraction of the proposed dataset, we first calculate the number of n-grams that appear only in the reference summaries and not in the original texts they summarize BIBREF14 , BIBREF27 . As shown in Figure FIGREF14 , 41.4% of words in the reference summaries did not appear in the original text. Additionally, 78.5%, 88.4%, and 92.3% of 2-, 3-, and 4-grams in the reference summaries did not appear in the original text. When compared to a standard abstractive news dataset also shown in the graph (DUC 2002), the legal dataset is significantly more abstractive.",
"Furthermore, as shown in Figure FIGREF15 , the dataset is very compressive, with a mean compression rate of 0.31 (std 0.23). The original texts have a mean of 3.6 (std 3.8) sentences per document and a mean of 105.6 (std 147.8) words per document. The reference summaries have a mean of 1.2 (std 0.6) sentences per document, and a mean of 17.2 (std 11.8) words per document."
],
[
"To verify that the summaries more accessible to a wider audience, we also compare the readability of the reference summaries and the original texts.",
"We make a comparison between the original contract sections and respective summaries using four common readability metrics. All readability metrics were implemented using Wim Muskee's readability calculator library for Python. These measurements included:",
"Flesch-Kincaid formula (F-K): the weighted sum of the number of words in a sentence and the number of syllables per word BIBREF28 ,",
"Coleman-Liau index (CL): the weighted sum of the number of letters per 100 words and the average number of sentences per 100 words BIBREF29 ,",
"SMOG: the weighted square root of the number of polysyllable words per sentence BIBREF30 , and",
"Automated readability index (ARI): the weighted sum of the number of characters per word and number of words per sentence BIBREF31 .",
"Though these metrics were originally formulated based on US grade levels, we have adjusted the numbers to provide the equivalent age correlated with the respective US grade level.",
"We ran each measurement on the reference summaries and original texts. As shown in Table TABREF23 , the reference summaries scored lower than the original texts for each test by an average of 6 years.",
"We also seek to single out lexical difficulty, as legal text often contains vocabulary that is difficult for non-professionals. To do this, we obtain the top 50 words INLINEFORM0 most associated with summaries and top 50 words INLINEFORM1 most associated with the original snippets (described below) and consider the differences of ARI and F-K measures. We chose these two measures because they are a weighted sum of a word and sentential properties; as sentential information is kept the same (50 1-word “sentences”), the differences will reflect the change in readability of the words most associated with plain English summaries/original texts.",
"To collect INLINEFORM0 and INLINEFORM1 , we calculate the log odds ratio for each word, a measure used in prior work comparing summary text and original documents BIBREF32 . The log odds ratio compares the probability of a word INLINEFORM2 occurring in the set of all summaries INLINEFORM3 vs. original texts INLINEFORM4 : INLINEFORM5 ",
"The list of words with the highest log odds ratios for the reference summaries ( INLINEFORM0 ) and original texts ( INLINEFORM1 ) can be found in Table TABREF25 .",
"We calculate the differences (in years) of ARI and F-K scores between INLINEFORM0 and INLINEFORM1 : INLINEFORM2 INLINEFORM3 ",
"Hence, there is a INLINEFORM0 6-year reading level distinction between the two sets of words, an indication that lexical difficulty is paramount in legal text."
],
[
"We present our legal dataset as a test set for contracts summarization. In this section, we report baseline performances of unsupervised, extractive methods as most recent supervised abstractive summarization methods, e.g., BIBREF33 , BIBREF14 , would not have enough training data in this domain. We chose to look at the following common baselines:"
],
[
"Our preliminary experiments and analysis show that summarizing legal contracts in plain English is challenging, and point to the potential usefulness of a simplification or style transfer system in the summarization pipeline. Yet this is challenging. First, there may be a substantial domain gap between legal documents and texts that existing simplification systems are trained on (e.g., Wikipedia, news). Second, popular supervised approaches such as treating sentence simplification as monolingual machine translation BIBREF35 , BIBREF23 , BIBREF36 , BIBREF37 , BIBREF38 would be difficult to apply due to the lack of sentence-aligned parallel corpora. Possible directions include unsupervised lexical simplification utilizing distributed representations of words BIBREF39 , BIBREF40 , unsupervised sentence simplification using rich semantic structure BIBREF41 , or unsupervised style transfer techniques BIBREF24 , BIBREF25 , BIBREF26 . However, there is not currently a dataset in this domain large enough for unsupervised methods, nor corpora unaligned but comparable in semantics across legal and plain English, which we see as a call for future research."
],
[
"In this paper, we propose the task of summarizing legal documents in plain English and present an initial evaluation dataset for this task. We gather our dataset from online sources dedicated to explaining sections of contracts in plain English and manually verify the quality of the summaries. We show that our dataset is highly abstractive and that the summaries are much simpler to read. This task is challenging, as popular unsupervised extractive summarization methods do not perform well on this dataset and, as discussed in section SECREF6 , current methods that address the change in register are mostly supervised as well. We call for the development of resources for unsupervised simplification and style transfer in this domain."
],
[
"We would like to personally thank Katrin Erk for her help in the conceptualization of this project. Additional thanks to May Helena Plumb, Barea Sinno, and David Beavers for their aid in the revision process. We are grateful for the anonymous reviewers and for the TLDRLegal and TOS;DR communities and their pursuit of transparency."
]
],
"section_name": [
"Introduction",
"Related work",
"Data",
"TL;DRLegal",
"TOS;DR",
"Levels of abstraction and compression",
"Readability",
"Summarization baselines",
"Discussion",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"0643717144e5ba833646bdad1964d569bf379e03",
"27ead4def34a07abec53eed367068456cf8c6f09"
],
"answer": [
{
"evidence": [
"We present our legal dataset as a test set for contracts summarization. In this section, we report baseline performances of unsupervised, extractive methods as most recent supervised abstractive summarization methods, e.g., BIBREF33 , BIBREF14 , would not have enough training data in this domain. We chose to look at the following common baselines:"
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (baseline list) TextRank, KLSum, Lead-1, Lead-K and Random-K",
"highlighted_evidence": [
"We chose to look at the following common baselines:"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 6: Performance for each dataset on the baselines was measured using Rouge-1, Rouge-2, and Rouge-L."
],
"extractive_spans": [],
"free_form_answer": "TextRank, KLSum, Lead-1, Lead-K, Random-K",
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: Performance for each dataset on the baselines was measured using Rouge-1, Rouge-2, and Rouge-L."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"039a367055da929c9a2c47f8f3f9ea623ebb4952",
"26776ef2c4a891426205523917ec88ee88569cfa"
],
"answer": [
{
"evidence": [
"The dataset we propose contains 446 sets of parallel text. We show the level of abstraction through the number of novel words in the reference summaries, which is significantly higher than the abstractive single-document summaries created for the shared tasks of the Document Understanding Conference (DUC) in 2002 BIBREF11 , a standard dataset used for single document news summarization. Additionally, we utilize several common readability metrics to show that there is an average of a 6 year reading level difference between the original documents and the reference summaries in our legal dataset."
],
"extractive_spans": [
"446"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset we propose contains 446 sets of parallel text. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The dataset we propose contains 446 sets of parallel text. We show the level of abstraction through the number of novel words in the reference summaries, which is significantly higher than the abstractive single-document summaries created for the shared tasks of the Document Understanding Conference (DUC) in 2002 BIBREF11 , a standard dataset used for single document news summarization. Additionally, we utilize several common readability metrics to show that there is an average of a 6 year reading level difference between the original documents and the reference summaries in our legal dataset."
],
"extractive_spans": [
"446 sets of parallel text"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset we propose contains 446 sets of parallel text."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What is the extractive technique used for summarization?",
"How big is the dataset?"
],
"question_id": [
"0715d510359eb4c851cf063c8b3a0c61b8a8edc0",
"4e106b03cc2f54373e73d5922e97f7e5e9bf03e4"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
""
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Unique n-grams in the reference summary, contrasting our legal dataset with DUC 2002 single document summarization data.",
"Table 4: Average readability scores for the reference summaries (Ref) and the original texts (Orig). Descriptions of each measurement can be found in Section 4.2.",
"Figure 2: Ratio of words in the reference summary to words in the original text. The ratio was calculated by dividing the number of words in the reference summary by the number of words in the original text.",
"Table 5: The 50 words most associated with the original text or reference summary, as measured by the log odds ratio.",
"Table 6: Performance for each dataset on the baselines was measured using Rouge-1, Rouge-2, and Rouge-L.",
"Table 7: Examples of reference summaries and results from various extractive summarization techniques. The text shown here has been pre-processed. To conserve space, original texts were excluded from most examples."
],
"file": [
"3-Figure1-1.png",
"5-Table4-1.png",
"5-Figure2-1.png",
"6-Table5-1.png",
"7-Table6-1.png",
"8-Table7-1.png"
]
} | [
"What is the extractive technique used for summarization?"
] | [
[
"1906.00424-7-Table6-1.png",
"1906.00424-Summarization baselines-0"
]
] | [
"TextRank, KLSum, Lead-1, Lead-K, Random-K"
] | 227 |
1802.06053 | Bayesian Models for Unit Discovery on a Very Low Resource Language | Developing speech technologies for low-resource languages has become a very active research field over the last decade. Among others, Bayesian models have shown some promising results on artificial examples but still lack of in situ experiments. Our work applies state-of-the-art Bayesian models to unsupervised Acoustic Unit Discovery (AUD) in a real low-resource language scenario. We also show that Bayesian models can naturally integrate information from other resourceful languages by means of informative prior leading to more consistent discovered units. Finally, discovered acoustic units are used, either as the 1-best sequence or as a lattice, to perform word segmentation. Word segmentation results show that this Bayesian approach clearly outperforms a Segmental-DTW baseline on the same corpus. | {
"paragraphs": [
[
"Out of nearly 7000 languages spoken worldwide, current speech (ASR, TTS, voice search, etc.) technologies barely address 200 of them. Broadening ASR technologies to ideally all possible languages is a challenge with very high stakes in many areas and is at the heart of several fundamental research problems ranging from psycholinguistic (how humans learn to recognize speech) to pure machine learning (how to extract knowledge from unlabeled data). The present work focuses on the narrow but important problem of unsupervised Acoustic Unit Discovery (AUD). It takes place as the continuation of an ongoing effort to develop a Bayesian model suitable for this task, which stems from the seminal work of BIBREF0 later refined and made scalable in BIBREF1 . This model, while rather crude, has shown that it can provide a clustering accurate enough to be used in topic identification of spoken document in unknown languages BIBREF2 . It was also shown that this model can be further improved by incorporating a Bayesian \"phonotactic\" language model learned jointly with the acoustic units BIBREF3 . Finally, following the work in BIBREF4 it has been combined successfully with variational auto-encoders leading to a model combining the potential of both deep neural networks and Bayesian models BIBREF5 . The contribution of this work is threefold:"
],
[
"The AUD model described in BIBREF0 , BIBREF1 is a non-parametric Bayesian Hidden Markov Model (HMM). This model is topologically equivalent to a phone-loop model with two major differences:",
"In this work, we have used two variants of this original model. The first one (called HMM model in the remainder of this paper), following the analysis led in BIBREF8 , approximates the Dirichlet Process prior by a mere symmetric Dirichlet prior. This approximation, while retaining the sparsity constraint, avoids the complication of dealing with the variational treatment of the stick breaking process frequent in Bayesian non-parametric models. The second variant, which we shall denote Structured Variational AutoEncoder (SVAE) AUD, is based upon the work of BIBREF4 and embeds the HMM model into the Variational AutoEncoder framework BIBREF9 . A very similar version of the SVAE for AUD was developed independently and presented in BIBREF5 . The main noteworthy difference between BIBREF5 and our model is that we consider a fully Bayesian version of the HMM embedded in the VAE; and the posterior distribution and the VAE parameters are trained jointly using the Stochastic Variational Bayes BIBREF4 , BIBREF10 . For both variants, the prior over the HMM parameters were set to the conjugate of the likelihood density: Normal-Gamma prior for the mean and variance of the Gaussian components, symmetric Dirichlet prior over the HMM's state mixture's weights and symmetric Dirichlet prior over the acoustic units' weights. For the case of the uninformative prior, the prior was set to be vague prior with one pseudo-observation BIBREF11 ."
],
[
"Bayesian Inference differs from other machine learning techniques by introducing a distribution INLINEFORM0 over the parameters of the model. A major concern in Bayesian Inference is usually to define a prior that makes as little assumption as possible. Such a prior is usually known as uninformative prior. Having a completely uninformative prior has the practical advantage that the prior distribution will have a minimal impact on the outcome of the inference leading to a model which bases its prediction purely and solely on the data. In the present work, we aim at the opposite behavior, we wish our AUD model to learn phone-like units from the unlabeled speech data of a target language given the knowledge that was previously accumulated from another resourceful language. More formally, the original AUD model training consists in estimate the a posteriori distribution of the parameters given the unlabeled speech data of a target language INLINEFORM1 : DISPLAYFORM0 ",
"The parameters are divided into two subgroups INLINEFORM0 where INLINEFORM1 are the global parameters of the model, and INLINEFORM2 are the latent variables which, in our case, correspond to the sequences of acoustic units. The global parameters are separated into two independent subsets : INLINEFORM3 , corresponding to the acoustic parameters ( INLINEFORM4 ) and the \"phonotactic\" language model parameters ( INLINEFORM5 ). Replacing INLINEFORM6 and following the conditional independence of the variable induced by the model (see BIBREF1 for details) leads to: DISPLAYFORM0 ",
"If we further assume that we have at our disposal speech data in a different language than the target one, denoted INLINEFORM0 , along with its phonetic transcription INLINEFORM1 , it is then straightforward to show that: DISPLAYFORM0 ",
" which is the same as Eq. EQREF8 but for the distribution of the acoustic parameters which is based on the data of the resourceful language. In contrast of the term uninformative prior we denote INLINEFORM0 as an informative prior. As illustrated by Eq. EQREF9 , a characteristic of Bayesian inference is that it naturally leads to a sequential inference. Therefore, model training can be summarized as:",
"Practically, the computation of the informative prior as well as the final posterior distribution is intractable and we seek for an approximation by means of the well known Variational Bayes Inference BIBREF12 . The approximate informative prior INLINEFORM0 is estimated by optimizing the variational lower bound of the evidence of the prior data INLINEFORM1 : DISPLAYFORM0 ",
"where INLINEFORM0 is the Kullback-Leibler divergence. Then, the posterior distribution of the parameters given the target data INLINEFORM1 can be estimated by optimizing the evidence of the target data INLINEFORM2 : DISPLAYFORM0 ",
"Note that when the model is trained with an uninformative prior the loss function is the as in Eq. EQREF13 but with INLINEFORM0 instead of the INLINEFORM1 . For the case of the uninformative prior, the Variational Bayes Inference was initialized as described in BIBREF1 . In the informative prior case, we initialized the algorithm by setting INLINEFORM2 ."
],
[
"We used the Mboshi5K corpus BIBREF13 as a test set for all the experiments reported here. Mboshi (Bantu C25) is a typical Bantu language spoken in Congo-Brazzaville. It is one of the languages documented by the BULB (Breaking the Unwritten Language Barrier) project BIBREF14 . This speech dataset was collected following a real language documentation scenario, using Lig_Aikuma, a mobile app specifically dedicated to fieldwork language documentation, which works both on android powered smartphones and tablets BIBREF15 . The corpus is multilingual (5130 Mboshi speech utterances aligned to French text) and contains linguists' transcriptions in Mboshi (in the form of a non-standard graphemic form close to the language phonology). It is also enriched with automatic forced-alignment between speech and transcriptions. The dataset is made available to the research community. More details on this corpus can be found in BIBREF13 .",
"TIMIT is also used as an extra speech corpus to train the informative prior. We used two different set of features: the mean normalized MFCC + INLINEFORM0 + INLINEFORM1 generated by HTK and the Multilingual BottleNeck (MBN) features BIBREF16 trained on the Czech, German, Portuguese, Russian, Spanish, Turkish and Vietnamese data of the Global Phone database."
],
[
"To evaluate our work we measured how the discovered units compared to the forced aligned phones in term of segmentation and information. The accuracy of the segmentation was measured in term of Precision, Recall and F-score. If a unit boundary occurs at the same time (+/- 10ms) of an actual phone boundary it is considered as a true positive, otherwise it is considered to be a false positive. If no match is found with a true phone boundary, this is considered to be a false negative. The consistency of the units was evaluated in term of normalized mutual information (NMI - see BIBREF1 , BIBREF3 , BIBREF5 for details) which measures the statistical dependency between the units and the forced aligned phones. A NMI of 0 % means that the units are completely independent of the phones whereas a NMI of 100 % indicates that the actual phones could be retrieved without error given the sequence of discovered units."
],
[
"In order to provide an extrinsic metric to evaluate the quality of the acoustic units discovered by our different methods, we performed an unsupervised word segmentation task on the acoustic units sequences, and evaluated the accuracy of the discovered word boundaries. We also wanted to experiment using lattices as an input for the word segmentation task, instead of using single sequences of units, so as to better mitigate the uncertainty of the AUD task and provide a companion metric that would be more robust to noise. A model capable of performing word segmentation both on lattices and text sequences was introduced by BIBREF6 . Building on the work of BIBREF17 , BIBREF18 they combine a nested hierarchical Pitman-Yor language model with a Weighted Finite State Transducer approach. Both for lattices and acoustic units sequences, we use the implementation of the authors with a bigram language model and a unigram character model. Word discovery is evaluated using the Boundary metric from the Zero Resource Challenge 2017 BIBREF20 and BIBREF21 . This metric measures the quality of a word segmentation and the discovered boundaries with respect to a gold corpus (Precision, Recall and F-score are computed)."
],
[
"First, we evaluated the standard HMM model with an uninformative prior (this will be our baseline) for the two different input features: MFCC (and derivatives) and MBN. Results are shown in Table TABREF20 . Surprisingly, the MBN features perform relatively poorly compared to the standard MFCC. These results are contradictory to those reported in BIBREF3 . Two factors may explain this discrepancy: the Mboshi5k data being different from the training data of the MBN neural network, the neural network may not generalize well. Another possibility may be that the initialization scheme of the model is not suitable for this type of features. Indeed, Variational Bayesian Inference algorithm converges only to a local optimum of the objective function and is therefore dependent of the initialization. We believe the second explanation is the more likely since, as we shall see shortly, the best results in term of word segmentation and NMI are eventually obtained with the MBN features when the inference is done with the informative prior. Next, we compared the HMM and the SVAE models when trained with an uninformative prior (lines with \"Inf. Prior\" set to \"no\" in Table TABREF23 ). The SVAE significantly improves the NMI and the precision showing that it extracts more consistent units than the HMM model. However, it also degrades the segmentation in terms of recall. We further investigated this behavior by looking at the duration of the units found by both models compared to the true phones (Table TABREF22 ). We observe that the SVAE model favors longer units than the HMM model hence leading to fewer boundaries and consequently smaller recall.",
"We then evaluated the effect of the informative prior on the acoustic unit discovery (Table TABREF23 ). On all 4 combinations (2 features sets INLINEFORM0 2 models) we observe an improvement in terms of precision and NMI but a degradation of the recall. This result is encouraging since the informative prior was trained on English data (TIMIT) which is very different from Mboshi. Indeed, this suggests that even speech from an unrelated language can be of some help in the design of an ASR for a very low resource language. Finally, similarly to the SVAE/HMM case described above, we found that the degradation of the recall is due to longer units discovered for models with an informative prior (numbers omitted due to lack of space).",
"Word discovery results are given in Table TABREF21 for the Boundary metric BIBREF20 , BIBREF21 . We observe that i) the best word boundary detection (F-score) is obtained with MBN features, an informative prior and the SVAE model; this confirms the results of table TABREF23 and shows that better AUD leads to better word segmentation ii) word segmentation from AUD graph Lattices is slightly better than from flat sequences of AUD symbols (1-best); iii) our results outperform a pure speech based baseline based on segmental DTW BIBREF22 (F-score of 19.3% on the exact same corpus)."
],
[
"We have conducted an analysis of the state-of-the-art Bayesian approach for acoustic unit discovery on a real case of low-resource language. This analysis was focused on the quality of the discovered units compared to the gold standard phone alignments. Outcomes of the analysis are i) the combination of neural network and Bayesian model (SVAE) yields a significant improvement in the AUD in term of consistency ii) Bayesian models can naturally embed information from a resourceful language and consequently improve the consistency of the discovered units. Finally, we hope this work can serve as a baseline for future research on unsupervised acoustic unit discovery in very low resource scenarios."
],
[
"This work was started at JSALT 2017 in CMU, Pittsburgh, and was supported by JHU and CMU (via grants from Google, Microsoft, Amazon, Facebook, Apple), by the Czech Ministry of Education, Youth and Sports from the National Programme of Sustainability (NPU II) project \"IT4Innovations excellence in science - LQ1602\" and by the French ANR and the German DFG under grant ANR-14-CE35-0002 (BULB project). This work used the Extreme Science and Engineering Discovery Environment (NSF grant number OCI-1053575 and NSF award number ACI-1445606)."
]
],
"section_name": [
"Introduction",
"Models",
"Informative Prior",
"Corpora and acoustic features",
"Acoustic unit discovery (AUD) evaluation",
"Extension to word discovery",
"Results and Discussion",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"d89b964a87a077b54e1a87ac112823826931a7e2",
"e4c5b9c8816d1ea67af2d0a2389c5c8834da2a25"
],
"answer": [
{
"evidence": [
"Word discovery results are given in Table TABREF21 for the Boundary metric BIBREF20 , BIBREF21 . We observe that i) the best word boundary detection (F-score) is obtained with MBN features, an informative prior and the SVAE model; this confirms the results of table TABREF23 and shows that better AUD leads to better word segmentation ii) word segmentation from AUD graph Lattices is slightly better than from flat sequences of AUD symbols (1-best); iii) our results outperform a pure speech based baseline based on segmental DTW BIBREF22 (F-score of 19.3% on the exact same corpus).",
"FLOAT SELECTED: Table 4: Effect of the informative prior on AUD (phone boundary detection) - Mboshi5k corpus"
],
"extractive_spans": [],
"free_form_answer": "18.08 percent points on F-score",
"highlighted_evidence": [
"We observe that i) the best word boundary detection (F-score) is obtained with MBN features, an informative prior and the SVAE model; this confirms the results of table TABREF23 and shows that better AUD leads to better word segmentation ii) word segmentation from AUD graph Lattices is slightly better than from flat sequences of AUD symbols (1-best); iii) our results outperform a pure speech based baseline based on segmental DTW BIBREF22 (F-score of 19.3% on the exact same corpus).",
"FLOAT SELECTED: Table 4: Effect of the informative prior on AUD (phone boundary detection) - Mboshi5k corpus"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2f2449ed82833eb924fd9bfb9cae3378bcd70f54",
"47042b2fed02961324290f97afad43501b3d436d"
],
"answer": [
{
"evidence": [
"We used the Mboshi5K corpus BIBREF13 as a test set for all the experiments reported here. Mboshi (Bantu C25) is a typical Bantu language spoken in Congo-Brazzaville. It is one of the languages documented by the BULB (Breaking the Unwritten Language Barrier) project BIBREF14 . This speech dataset was collected following a real language documentation scenario, using Lig_Aikuma, a mobile app specifically dedicated to fieldwork language documentation, which works both on android powered smartphones and tablets BIBREF15 . The corpus is multilingual (5130 Mboshi speech utterances aligned to French text) and contains linguists' transcriptions in Mboshi (in the form of a non-standard graphemic form close to the language phonology). It is also enriched with automatic forced-alignment between speech and transcriptions. The dataset is made available to the research community. More details on this corpus can be found in BIBREF13 ."
],
"extractive_spans": [
"5130"
],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus is multilingual (5130 Mboshi speech utterances aligned to French text) and contains linguists' transcriptions in Mboshi (in the form of a non-standard graphemic form close to the language phonology)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We used the Mboshi5K corpus BIBREF13 as a test set for all the experiments reported here. Mboshi (Bantu C25) is a typical Bantu language spoken in Congo-Brazzaville. It is one of the languages documented by the BULB (Breaking the Unwritten Language Barrier) project BIBREF14 . This speech dataset was collected following a real language documentation scenario, using Lig_Aikuma, a mobile app specifically dedicated to fieldwork language documentation, which works both on android powered smartphones and tablets BIBREF15 . The corpus is multilingual (5130 Mboshi speech utterances aligned to French text) and contains linguists' transcriptions in Mboshi (in the form of a non-standard graphemic form close to the language phonology). It is also enriched with automatic forced-alignment between speech and transcriptions. The dataset is made available to the research community. More details on this corpus can be found in BIBREF13 ."
],
"extractive_spans": [
"5130 Mboshi speech utterances"
],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus is multilingual (5130 Mboshi speech utterances aligned to French text) and contains linguists' transcriptions in Mboshi (in the form of a non-standard graphemic form close to the language phonology)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ab4b4f7f89f98e0fd8f7c68515a90c4907ea8789",
"df1c204bdcd8228525c7509378dcd75b17a3bce9"
],
"answer": [
{
"evidence": [
"The AUD model described in BIBREF0 , BIBREF1 is a non-parametric Bayesian Hidden Markov Model (HMM). This model is topologically equivalent to a phone-loop model with two major differences:",
"In this work, we have used two variants of this original model. The first one (called HMM model in the remainder of this paper), following the analysis led in BIBREF8 , approximates the Dirichlet Process prior by a mere symmetric Dirichlet prior. This approximation, while retaining the sparsity constraint, avoids the complication of dealing with the variational treatment of the stick breaking process frequent in Bayesian non-parametric models. The second variant, which we shall denote Structured Variational AutoEncoder (SVAE) AUD, is based upon the work of BIBREF4 and embeds the HMM model into the Variational AutoEncoder framework BIBREF9 . A very similar version of the SVAE for AUD was developed independently and presented in BIBREF5 . The main noteworthy difference between BIBREF5 and our model is that we consider a fully Bayesian version of the HMM embedded in the VAE; and the posterior distribution and the VAE parameters are trained jointly using the Stochastic Variational Bayes BIBREF4 , BIBREF10 . For both variants, the prior over the HMM parameters were set to the conjugate of the likelihood density: Normal-Gamma prior for the mean and variance of the Gaussian components, symmetric Dirichlet prior over the HMM's state mixture's weights and symmetric Dirichlet prior over the acoustic units' weights. For the case of the uninformative prior, the prior was set to be vague prior with one pseudo-observation BIBREF11 ."
],
"extractive_spans": [
"Structured Variational AutoEncoder (SVAE) AUD",
"Bayesian Hidden Markov Model (HMM)"
],
"free_form_answer": "",
"highlighted_evidence": [
"The AUD model described in BIBREF0 , BIBREF1 is a non-parametric Bayesian Hidden Markov Model (HMM).",
"In this work, we have used two variants of this original model. The first one (called HMM model in the remainder of this paper), following the analysis led in BIBREF8 , approximates the Dirichlet Process prior by a mere symmetric Dirichlet prior.",
"The second variant, which we shall denote Structured Variational AutoEncoder (SVAE) AUD, is based upon the work of BIBREF4 and embeds the HMM model into the Variational AutoEncoder framework BIBREF9 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The AUD model described in BIBREF0 , BIBREF1 is a non-parametric Bayesian Hidden Markov Model (HMM). This model is topologically equivalent to a phone-loop model with two major differences:"
],
"extractive_spans": [
"non-parametric Bayesian Hidden Markov Model"
],
"free_form_answer": "",
"highlighted_evidence": [
"The AUD model described in BIBREF0 , BIBREF1 is a non-parametric Bayesian Hidden Markov Model (HMM)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"039d955150ef8cfc7470822ad0f535809776fc94",
"ab0d234c653939fa6fea2c006bf7072e2c0633a3"
],
"answer": [
{
"evidence": [
"We used the Mboshi5K corpus BIBREF13 as a test set for all the experiments reported here. Mboshi (Bantu C25) is a typical Bantu language spoken in Congo-Brazzaville. It is one of the languages documented by the BULB (Breaking the Unwritten Language Barrier) project BIBREF14 . This speech dataset was collected following a real language documentation scenario, using Lig_Aikuma, a mobile app specifically dedicated to fieldwork language documentation, which works both on android powered smartphones and tablets BIBREF15 . The corpus is multilingual (5130 Mboshi speech utterances aligned to French text) and contains linguists' transcriptions in Mboshi (in the form of a non-standard graphemic form close to the language phonology). It is also enriched with automatic forced-alignment between speech and transcriptions. The dataset is made available to the research community. More details on this corpus can be found in BIBREF13 ."
],
"extractive_spans": [
"Mboshi "
],
"free_form_answer": "",
"highlighted_evidence": [
"Mboshi (Bantu C25) is a typical Bantu language spoken in Congo-Brazzaville. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We used the Mboshi5K corpus BIBREF13 as a test set for all the experiments reported here. Mboshi (Bantu C25) is a typical Bantu language spoken in Congo-Brazzaville. It is one of the languages documented by the BULB (Breaking the Unwritten Language Barrier) project BIBREF14 . This speech dataset was collected following a real language documentation scenario, using Lig_Aikuma, a mobile app specifically dedicated to fieldwork language documentation, which works both on android powered smartphones and tablets BIBREF15 . The corpus is multilingual (5130 Mboshi speech utterances aligned to French text) and contains linguists' transcriptions in Mboshi (in the form of a non-standard graphemic form close to the language phonology). It is also enriched with automatic forced-alignment between speech and transcriptions. The dataset is made available to the research community. More details on this corpus can be found in BIBREF13 ."
],
"extractive_spans": [
"Mboshi (Bantu C25)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used the Mboshi5K corpus BIBREF13 as a test set for all the experiments reported here. Mboshi (Bantu C25) is a typical Bantu language spoken in Congo-Brazzaville. It is one of the languages documented by the BULB (Breaking the Unwritten Language Barrier) project BIBREF14 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"By how much they outperform the baseline?",
"How long are the datasets?",
"What bayesian model is trained?",
"What low resource languages are considered?"
],
"question_id": [
"f8edc911f9e16559506f3f4a6bda74cde5301a9a",
"8c288120139615532838f21094bba62a77f92617",
"a464052fd11af1d2d99e407c11791269533d43d1",
"5f6c1513cbda9ae711bc38df08fe72e3d3028af2"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: AUD results of the baseline (HMM model with uninformative prior) - Mboshi5k corpus",
"Table 2: Precision, Recall and F-measure on word boundaries, using different AUD methods. Segmental DTW baseline [23] gave F-score of 19.3% on the exact same corpus; dpseg [20] was also used as a word segmentation baseline and gave similar (slightly lower) F-scores to 1-best (best config with dpseg gave 42.5%) - Mboshi5k corpus",
"Table 3: Average duration of the (AUD) units (AUD) for the HMM and SVAE models trained with an uninformative prior. ”phones” refers to the forced aligned phone reference.",
"Table 4: Effect of the informative prior on AUD (phone boundary detection) - Mboshi5k corpus"
],
"file": [
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png"
]
} | [
"By how much they outperform the baseline?"
] | [
[
"1802.06053-Results and Discussion-2",
"1802.06053-5-Table4-1.png"
]
] | [
"18.08 percent points on F-score"
] | 228 |
1909.00871 | It's All in the Name: Mitigating Gender Bias with Name-Based Counterfactual Data Substitution | This paper treats gender bias latent in word embeddings. Previous mitigation attempts rely on the operationalisation of gender bias as a projection over a linear subspace. An alternative approach is Counterfactual Data Augmentation (CDA), in which a corpus is duplicated and augmented to remove bias, e.g. by swapping all inherently-gendered words in the copy. We perform an empirical comparison of these approaches on the English Gigaword and Wikipedia, and find that whilst both successfully reduce direct bias and perform well in tasks which quantify embedding quality, CDA variants outperform projection-based methods at the task of drawing non-biased gender analogies by an average of 19% across both corpora. We propose two improvements to CDA: Counterfactual Data Substitution (CDS), a variant of CDA in which potentially biased text is randomly substituted to avoid duplication, and the Names Intervention, a novel name-pairing technique that vastly increases the number of words being treated. CDA/S with the Names Intervention is the only approach which is able to mitigate indirect gender bias: following debiasing, previously biased words are significantly less clustered according to gender (cluster purity is reduced by 49%), thus improving on the state-of-the-art for bias mitigation. | {
"paragraphs": [
[
"Gender bias describes an inherent prejudice against a gender, captured both by individuals and larger social systems. Word embeddings, a popular machine-learnt semantic space, have been shown to retain gender bias present in corpora used to train them BIBREF0. This results in gender-stereotypical vector analogies à la NIPS20135021, such as man:computer programmer :: woman:homemaker BIBREF1, and such bias has been shown to materialise in a variety of downstream tasks, e.g. coreference resolution BIBREF2, BIBREF3.",
"By operationalising gender bias in word embeddings as a linear subspace, DBLP:conf/nips/BolukbasiCZSK16 are able to debias with simple techniques from linear algebra. Their method successfully mitigates [author=simone,color=blue!40,size=,fancyline,caption=,]does not particularly like boldfacing for emphasis, but can live with.direct bias: man is no longer more similar to computer programmer in vector space than woman. However, the structure of gender bias in vector space remains largely intact, and the new vectors still evince indirect bias: associations which result from gender bias between not explicitly gendered words, for example a possible association between football and business resulting from their mutual association with explicitly masculine words BIBREF4. In this paper we continue the work of BIBREF4, and show that another paradigm for gender bias mitigation proposed by BIBREF5, Counterfactual Data Augmentation (CDA), is also unable to mitigate indirect bias. We also show, using a new test we describe (non-biased gender analogies), that WED might be removing too much gender information, casting further doubt on its operationalisation of gender bias as a linear subspace.",
"To improve CDA we make two proposals. The first, Counterfactual Data Substitution (CDS), is designed to avoid text duplication in favour of substitution. The second, the Names Intervention, is a method which can be applied to either CDA or CDS, and treats bias inherent in first names. It does so using a novel name pairing strategy that accounts for both name frequency and gender-specificity. Using our improvements, the clusters of the most biased words exhibit a reduction of cluster purity by an average of 49% across both corpora following treatment, thereby offering a partial solution to the problem of indirect bias as formalised by BIBREF4. [author=simone,color=blue!40,size=,fancyline,caption=,]first part of reaction to reviewer 4Additionally, although one could expect that the debiased embeddings might suffer performance losses in computational linguistic tasks, our embeddings remain useful for at least two such tasks, word similarity and sentiment classification BIBREF6."
],
[
"The measurement and mitigation of gender bias relies on the chosen operationalisation of gender bias. As a direct consequence, how researchers choose to operationalise bias determines both the techniques at one's disposal to mitigate the bias, as well as the yardstick by which success is determined."
],
[
"One popular method for the mitigation of gender bias, introduced by DBLP:conf/nips/BolukbasiCZSK16, measures the genderedness of words by the extent to which they point in a gender direction. Suppose we embed our words into $\\mathbb {R}^d$. The fundamental assumption is that there exists a linear subspace $B \\subset \\mathbb {R}^d$ that contains (most of) the gender bias in the space of word embeddings. (Note that $B$ is a direction when it is a single vector.) We term this assumption the gender subspace hypothesis. Thus, by basic linear algebra, we may decompose any word vector $\\mathbf {v}\\in \\mathbb {R}^d$ as the sum of the projections onto the bias subspace and its complement: $\\mathbf {v}= \\mathbf {v}_{B} + \\mathbf {v}_{\\perp B}$. The (implicit) operationalisation of gender bias under this hypothesis is, then, the magnitiude of the bias vector $||\\mathbf {v}_{B}||_2$.",
"To capture $B$, BIBREF1 first construct two sets, ${\\cal D}_{\\textit {male}}$ and ${\\cal D}_{\\textit {female}}$ containing the male- and female-oriented pairs, using a set of gender-definitional pairs, e.g., man–woman and husband–wife. They then define ${\\cal D}= {\\cal D}_{\\textit {male}}\\cup {\\cal D}_{\\textit {female}}$ as the union of the two sets. They compute the empirical covariance matrix",
"where $\\mu $ is the mean embeddings of the words in ${\\cal D}$, then $B$ is taken to be the $k$ eigenvectors of $C$ associated with the largest eigenvalues. BIBREF1 set $k=1$, and thus define a gender direction.",
"Using this operalisation of gender bias, BIBREF1 go on to provide a linear-algebraic method (Word Embedding Debiasing, WED, originally “hard debiasing”) to remove gender bias in two phases: first, for non-gendered words, the gender direction is removed (“neutralised”). Second, pairs of gendered words such as mother and father are made equidistant to all non-gendered words (“equalised”). Crucially, under the gender subspace hypothesis, it is only necessary to identify the subspace $B$ as it is possible to perfectly remove the bias under this operationalisation using tools from numerical linear algebra.",
"The method uses three sets of words or word pairs: 10 definitional pairs (used to define the gender direction), 218 gender-specific seed words (expanded to a larger set using a linear classifier, the compliment of which is neutralised in the first step), and 52 equalise pairs (equalised in the second step). The relationships among these sets are illustrated in Figure FIGREF3; for instance, gender-neutral words are defined as all words in an embedding that are not gender-specific.",
"BIBREF1 find that this method results in a 68% reduction of stereotypical analogies as identified by human judges. However, bias is removed only insofar as the operationalisation allows. In a comprehensive analysis, hila show that the original structure of bias in the WED embedding space remains intact."
],
[
"As an alternative to WED, BIBREF5 propose Counterfactual Data Augmentation (CDA), in which a text transformation designed to invert bias is performed on a text corpus, the result of which is then appended to the original, to form a new bias-mitigated corpus used for training embeddings. Several interventions are proposed: in the simplest, occurrences of words in 124 gendered word pairs are swapped. For example, `the woman cleaned the kitchen' would (counterfactually) become `the man cleaned the kitchen' as man–woman is on the list. Both versions would then together be used in embedding training, in effect neutralising the man–woman bias.",
"The grammar intervention, BIBREF5's improved intervention, uses coreference information to veto swapping gender words when they corefer to a proper noun. This avoids Elizabeth ...she ...queen being changed to, for instance, Elizabeth ...he ...king. It also uses POS information to avoid ungrammaticality related to the ambiguity of her between personal pronoun and possessive determiner. In the context, `her teacher was proud of her', this results in the correct sentence `his teacher was proud of him'."
],
[
"We prefer the philosophy of CDA over WED as it makes fewer assumptions about the operationalisation of the bias it is meant to mitigate."
],
[
"The duplication of text which lies at the heart of CDA will produce debiased corpora with peculiar statistical properties unlike those of naturally occurring text. Almost all observed word frequencies will be even, with a notable jump from 2 directly to 0, and a type–token ratio far lower than predicted by Heaps' Law for text of this length. The precise effect this will have on the resulting embedding space is hard to predict, but we assume that it is preferable not to violate the fundamental assumptions of the algorithms used to create embeddings. As such, we propose to apply substitutions probabilistically (with 0.5 probability), which results in a non-duplicated counterfactual training corpus, a method we call Counterfactual Data Substitution (CDS). Substitutions are performed on a per-document basis in order to maintain grammaticality and discourse coherence. This simple change should have advantages in terms of naturalness of text and processing efficiency, as well as theoretical foundation."
],
[
"Our main technical contribution in this paper is to provide a method for better counterfactual augmentation, which is based on bipartite-graph matching of names. Instead of Lu et. al's (2018) solution of not treating words which corefer to proper nouns in order to maintain grammaticality, we propose an explicit treatment of first names. This is because we note that as a result of not swapping the gender of words which corefer with proper nouns, CDA could in fact reinforce certain biases instead of mitigate them. Consider the sentence `Tom ...He is a successful and powerful executive.' Since he and Tom corefer, the counterfactual corpus copy will not replace he with she in this instance, and as the method involves a duplication of text, this would result in a stronger, not weaker, association between he and gender-stereotypic concepts present like executive. Even under CDS, this would still mean that biased associations are left untreated (albeit at least not reinforced). Treating names should in contrast effect a real neutralisation of bias, with the added bonus that grammaticality is maintained without the need for coreference resolution.",
"The United States Social Security Administration (SSA) dataset contains a list of all first names from Social Security card applications for births in the United States after 1879, along with their gender. Figure FIGREF8 plots a few example names according to their male and female occurrences, and shows that names have varying degrees of gender-specificity.",
"We fixedly associate pairs of names for swapping, thus expanding BIBREF5's short list of gender pairs vastly. Clearly both name frequency and the degree of gender-specificity are relevant to this bipartite matching. If only frequency were considered, a more gender-neutral name (e.g. Taylor) could be paired with a very gender-specific name (e.g. John), which would negate the gender intervention in many cases (namely whenever a male occurrence of Taylor is transformed into John, which would also result in incorrect pronouns, if present). If, on the other hand, only the degree of gender-specificity were considered, we would see frequent names (like James) being paired with far less frequent names (like Sybil), which would distort the overall frequency distribution of names. This might also result in the retention of a gender signal: for instance, swapping a highly frequent male name with a rare female name might simply make the rare female name behave as a new link between masculine contexts (instead of the original male name), as it rarely appears in female contexts.",
"Figure FIGREF13 shows a plot of various names' number of primary gender occurances against their secondary gender occurrences, with red dots for primary-male and blue crosses for primary-female names. The problem of finding name-pairs thus decomposes into a Euclidean-distance bipartite matching problem, which can be solved using the Hungarian method BIBREF7. We compute pairs for the most frequent 2500 names of each gender in the SSA dataset. There is also the problem that many names are also common nouns (e.g. Amber, Rose, or Mark), which we solve using Named Entity Recognition."
],
[
"We compare eight variations of the mitigation methods. CDA is our reimplementation of BIBREF5's (BIBREF5) naïve intervention, gCDA uses their grammar intervention, and nCDA uses our new Names Intervention. gCDS and nCDS are variants of the grammar and Names Intervention using CDS. WED40 is our reimplementation of BIBREF1's (BIBREF1) method, which (like the original) uses a single component to define the gender subspace, accounting for $>40\\%$ of variance. As this is much lower than in the original paper (where it was 60%, reproduced in Figure FIGREF18), we define a second space, WED70, which uses a 2D subspace accounting for $>70\\%$ of variance. To test whether WED profits from additional names, we use the 5000 paired names in the names gazetteer as additional equalise pairs (nWED70). As control, we also evaluate the unmitigated space (none).",
"We perform an empirical comparison of these bias mitigation techniques on two corpora, the Annotated English Gigaword BIBREF8 and Wikipedia. Wikipedia is of particular interest, since though its Neutral Point of View (NPOV) policy predicates that all content should be presented without bias, women are nonetheless less likely to be deemed “notable” than men of equal stature BIBREF9, and there are differences in the choice of language used to describe them BIBREF10, BIBREF11. We use the annotation native to the Annotated English Gigaword, and process Wikipedia with CoreNLP (statistical coreference; bidirectional tagger). Embeddings are created using Word2Vec. We use the original complex lexical input (gender-word pairs and the like) for each algorithm as we assume that this benefits each algorithm most. [author=simone,color=blue!40,size=,fancyline,caption=,]I am not 100% sure of which \"expansion\" you are talking about here. The classifier Bolucbasi use maybe?[author=rowan,color=green!40,size=,fancyline,caption=,]yup - clarified Expanding the set of gender-specific words for WED (following BIBREF1, using a linear classifier) on Gigaword resulted in 2141 such words, 7146 for Wikipedia.",
"In our experiments, we test the degree to which the spaces are successful at mitigating direct and indirect bias, as well as the degree to which they can still be used in two NLP tasks standardly performed with embeddings, word similarity and sentiment classification. We also introduce one further, novel task, which is designed to quantify how well the embedding spaces capture an understanding of gender using non-biased analogies. Our evaluation matrix and methodology is expanded below."
],
[
"BIBREF0 introduce the Word Embedding Association Test (WEAT), which provides results analogous to earlier psychological work by BIBREF12 by measuring the difference in relative similarity between two sets of target words $X$ and $Y$ and two sets of attribute words $A$ and $B$. We compute Cohen's $d$ (a measure of the difference in relative similarity of the word sets within each embedding; higher is more biased), and a one-sided $p$-value which indicates whether the bias detected by WEAT within each embedding is significant (the best outcome being that no such bias is detectable). We do this for three tests proposed by BIBREF13 which measure the strength of various gender stereotypes: art–maths, arts–sciences, and careers–family."
],
[
"To demonstrate indirect gender bias we adapt a pair of methods proposed by BIBREF4. First, we test whether the most-biased words prior to bias mitigation remain clustered following bias mitigation. To do this, we define a new subspace, $\\vec{b}_\\text{test}$, using the 23 word pairs used in the Google Analogy family test subset BIBREF14 following BIBREF1's (BIBREF1) method, and determine the 1000 most biased words in each corpus (the 500 words most similar to $\\vec{b}_\\text{test}$ and $-\\vec{b}_\\text{test}$) in the unmitigated embedding. For each debiased embedding we then project these words into 2D space with tSNE BIBREF15, compute clusters with k-means, and calculate the clusters' V-measure BIBREF16. Low values of cluster purity indicate that biased words are less clustered following bias mitigation.",
"Second, we test whether a classifier can be trained to reclassify the gender of debiased words. If it succeeds, this would indicate that bias-information still remains in the embedding. We trained an RBF-kernel SVM classifier on a random sample of 1000 out of the 5000 most biased words from each corpus using $\\vec{b}_\\text{test}$ (500 from each gender), then report the classifier's accuracy when reclassifying the remaining 4000 words."
],
[
"The quality of a space is traditionally measured by how well it replicates human judgements of word similarity. The SimLex-999 dataset BIBREF17 provides a ground-truth measure of similarity produced by 500 native English speakers. Similarity scores in an embedding are computed as the cosine angle between word-vector pairs, and Spearman correlation between embedding and human judgements are reported. We measure correlative significance at $\\alpha = 0.01$."
],
[
"Following BIBREF6, we use a standard sentiment classification task to quantify the downstream performance of the embedding spaces when they are used as a pretrained word embedding input BIBREF18 to Doc2Vec on the Stanford Large Movie Review dataset. The classification is performed by an SVM classifier using the document embeddings as features, trained on 40,000 labelled reviews and tested on the remaining 10,000 documents, reported as error percentage."
],
[
"When proposing WED, BIBREF1 use human raters to class gender-analogies as either biased (woman:housewife :: man:shopkeeper) or appropriate (woman:grandmother :: man::grandfather), and postulate that whilst biased analogies are undesirable, appropriate ones should remain. Our new analogy test uses the 506 analogies in the family analogy subset of the Google Analogy Test set BIBREF14 to define many such appropriate analogies that should hold even in a debiased environment, such as boy:girl :: nephew:niece. We use a proportional pair-based analogy test, which measures each embedding's performance when drawing a fourth word to complete each analogy, and report error percentage."
],
[
"Table TABREF27 presents the $d$ scores and WEAT one-tailed $p$-values, which indicate whether the difference in samples means between targets $X$ and $Y$ and attributes $A$ and $B$ is significant. We also compute a two-tailed $p$-value to determine whether the difference between the various sets is significant.",
"On Wikipedia, nWED70 outperforms every other method ($p<0.01$), and even at $\\alpha =0.1$ bias was undetectable. In all CDA/S variants, the Names Intervention performs significantly better than other intervention strategies (average $d$ for nCDS across all tests 0.95 vs. 1.39 for the best non-names CDA/S variants). Excluding the Wikipedia careers–family test (in which the CDA and CDS variants are indistinguishable at $\\alpha =0.01$), the CDS variants are numerically better than their CDA counterparts in 80% of the test cases, although many of these differences are not significant. Generally, we notice a trend of WED reducing direct gender bias slightly better than CDA/S. Impressively, WED even successfully reduces bias in the careers–family test, where gender information is captured by names, which were not in WED's gender-equalise word-pair list for treatment."
],
[
"Figure FIGREF30 shows the V-measures of the clusters of the most biased words in Wikipedia for each embedding. Gigaword patterns similarly (see appendix). Figure FIGREF31 shows example tSNE projections for the Gigaword embeddings (“$\\mathrm {V}$” refers to their V-measures; these examples were chosen as they represent the best results achieved by BIBREF1's (BIBREF1) method, BIBREF5's (BIBREF5) method, and our new names variant). On both corpora, the new nCDA and nCDS techniques have significantly lower purity of biased-word cluster than all other evaluated mitigation techniques (0.420 for nCDS on Gigaword, which corresponds to a reduction of purity by 58% compared to the unmitigated embedding, and 0.609 (39%) on Wikipedia). nWED70's V-Measure is significantly higher than either of the other Names variants (reduction of 11% on Gigaword, only 1% on Wikipedia), suggesting that the success of nCDS and nCDA is not merely due to their larger list of gender-words.",
"Figure FIGREF33 shows the results of the second test of indirect bias, and reports the accuracy of a classifier trained to reclassify previously gender biased words on the Wikipedia embeddings (Gigaword patterns similarly). These results reinforce the finding of the clustering experiment: once again, nCDS outperforms all other methods significantly on both corpora ($p<0.01$), although it should be noted that the successful reclassification rate remains relatively high (e.g. 88.9% on Wikipedia).",
"We note that nullifying indirect bias associations entirely is not necessarily the goal of debiasing, since some of these may result from causal links in the domain. For example, whilst associations between man and engineer and between man and car are each stereotypic (and thus could be considered examples of direct bias), an association between engineer and car might well have little to do with gender bias, and so should not be mitigated."
],
[
"Table TABREF35 reports the SimLex-999 Spearman rank-order correlation coefficients $r_s$ (all are significant, $p<0.01$). Surprisingly, the WED40 and 70 methods outperform the unmitigated embedding, although the difference in result is small (0.386 and 0.395 vs. 0.385 on Gigaword, 0.371 and 0.367 vs. 0.368 on Wikipedia). nWED70, on the other hand, performs worse than the unmitigated embedding (0.384 vs. 0.385 on Gigaword, 0.367 vs. 0.368 on Wikipedia). CDA and CDS methods do not match the quality of the unmitigated space, but once again the difference is small. [author=simone,color=blue!40,size=,fancyline,caption=,]Second Part of Reaction to Reviewer 4.It should be noted that since SimLex-999 was produced by human raters, it will reflect the human biases these methods were designed to remove, so worse performance might result from successful bias mitigation."
],
[
"Figure FIGREF37 shows the sentiment classification error rates for Wikipedia (Gigaword patterns similarly). Results are somewhat inconclusive. While WED70 significantly improves the performance of the sentiment classifier from the unmitigated embedding on both corpora ($p<0.05$), the improvement is small (never more than 1.1%). On both corpora, nothing outperforms WED70 or the Names Intervention variants."
],
[
"Figure FIGREF39 shows the error rates for non-biased gender analogies for Wikipedia. CDA and CDS are numerically better than the unmitigated embeddings (an effect which is always significant on Gigaword, shown in the appendices, but sometimes insignificant on Wikipedia). The WED variants, on the other hand, perform significantly worse than the unmitigated sets on both corpora (27.1 vs. 9.3% for the best WED variant on Gigaword; 18.8 vs. 8.7% on Wikipedia). WED thus seems to remove too much gender information, whilst CDA and CDS create an improved space, perhaps because they reduce the effect of stereotypical associations which were previously used incorrectly when drawing analogies."
],
[
"We have replicated two state-of-the-art bias mitigation techniques, WED and CDA, on two large corpora, Wikipedia and the English Gigaword. In our empirical comparison, we found that although both methods mitigate direct gender bias and maintain the interpretability of the space, WED failed to maintain a robust representation of gender (the best variants had an error rate of 23% average when drawing non-biased analogies, suggesting that too much gender information was removed). A new variant of CDA we propose (the Names Intervention) is the only to successfully mitigate indirect gender bias: following its application, previously biased words are significantly less clustered according to gender, with an average of 49% reduction in cluster purity when clustering the most biased words. We also proposed Counterfactual Data Substitution, which generally performed better than the CDA equivalents, was notably quicker to compute (as Word2Vec is linear in corpus size), and in theory allows for multiple intervention layers without a corpus becoming exponentially large.",
"A fundamental limitation of all the methods compared is their reliance on predefined lists of gender words, in particular of pairs. BIBREF5's pairs of manager::manageress and murderer::murderess may be counterproductive, as their augmentation method perpetuates a male reading of manager, which has become gender-neutral over time. Other issues arise from differences in spelling (e.g. mum vs. mom) and morphology (e.g. his vs. her and hers). Biologically-rooted terms like breastfeed or uterus do not lend themselves to pairing either. The strict use of pairings also imposes a gender binary, and as a result non-binary identities are all but ignored in the bias mitigation literature. [author=rowan,color=green!40,size=,fancyline,caption=,]added this para back in and chopped it up a bit, look okay?",
"Future work could extend the Names Intervention to names from other languages beyond the US-based gazetteer used here. Our method only allows for there to be an equal number of male and female names, but if this were not the case one ought to explore the possibility of a many-to-one mapping, or perhaps a probablistic approach (though difficulties would be encountered sampling simultaneously from two distributions, frequency and gender-specificity). A mapping between nicknames (not covered by administrative sources) and formal names could be learned from a corpus for even wider coverage, possibly via the intermediary of coreference chains. Finally, given that names have been used in psychological literature as a proxy for race (e.g. BIBREF12), the Names Intervention could also be used to mitigate racial biases (something which, to the authors' best knowledge, has never been attempted), but finding pairings could prove problematic. It is important that other work looks into operationalising bias beyond the subspace definition proposed by BIBREF1, as it is becoming increasingly evident that gender bias is not linear in embedding space."
],
[
"We found the equations suggested in DBLP:conf/nips/BolukbasiCZSK16 on the opaque side of things. So we provide here proofs missing from the original work ourselves.",
"Proposition 1 The neutralise step of DBLP:conf/nips/BolukbasiCZSK16 yields a unit vector. Specifically, DBLP:conf/nips/BolukbasiCZSK16 define",
"",
"We want to prove that $||\\vec{w}||_2 = 1$",
"where we note that $\\nu = \\mu - \\mu _{\\perp B}$ so it is orthogonal to both $\\vec{w}_B$ and $\\vec{\\mu }_B$.",
"Proposition 2 The equalise step of DBLP:conf/nips/BolukbasiCZSK16 ensures that gendered pairs, e.g. man–woman, are equidistant to all gender-neutral words.",
"The normalized vectors for gendered words are orthogonal to those gender-neutral words by construction. Thus, the distance in both cases is simply $\\nu $."
],
[
"Below are listed the word sets we used for the WEAT to test direct bias, as defined by BIBREF13. Note that for the careers–family test, the target and attribute words have been reversed; that is, gender is captured by the target words, rather than the attribute words. Whilst this distinction is important in the source psychological literature BIBREF12, mathematically the target sets and attribute sets are indistinguishable and fully commutative."
],
[
"$\\text{Target}_X$: math, algebra, geometry, calculus, equations, computation, numbers, addition; $\\text{Target}_Y$: poetry, art, dance, literature, novel, symphony, drama, sculpture; $\\text{Attribute}_A$: male, man, boy, brother, he, him, his, son; $\\text{Attribute}_B$: female, woman, girl, sister, she, her, hers, daughter"
],
[
"$\\text{Target}_X$: science, technology, physics, chemistry, Einstein, NASA, experiment, astronomy; $\\text{Target}_Y$: poetry, art, Shakespeare, dance, literature, novel, symphony, drama; $\\text{Attribute}_A$: brother, father, uncle, grandfather, son, he, his, him; $\\text{Attribute}_B$: sister, mother, aunt, grandmother, daughter, she, hers, her"
],
[
"$\\text{Target}_X$: John, Paul, Mike, Kevin, Steve, Greg, Jeff, Bill; $\\text{Target}_Y$: Amy, Joan, Lisa, Sarah, Diana, Kate, Ann, Donna; $\\text{Attribute}_A$: executive, management, professional, corporation, salary, office, business, career; $\\text{Attribute}_B$: home, parents, children, family, cousins, marriage, wedding, relatives"
],
[
"Additional results for the Annotated English Gigaword are given here."
]
],
"section_name": [
"Introduction",
"Related Work",
"Related Work ::: Word Embedding Debiasing",
"Related Work ::: Counterfactual Data Augmentation",
"Improvements to CDA",
"Improvements to CDA ::: Counterfactual Data Substitution",
"Improvements to CDA ::: The Names Intervention",
"Experimental Setup",
"Experimental Setup ::: Direct bias",
"Experimental Setup ::: Indirect bias",
"Experimental Setup ::: Word similarity",
"Experimental Setup ::: Sentiment classification",
"Experimental Setup ::: Non-biased gender analogies",
"Results ::: Direct bias",
"Results ::: Indirect bias",
"Results ::: Word similarity",
"Results ::: Sentiment classification",
"Results ::: Non-biased gender analogies",
"Conclusion",
"Proofs for method from @!START@BIBREF1@!END@",
"WEAT word sets",
"WEAT word sets ::: Art–Maths",
"WEAT word sets ::: Arts–Sciences",
"WEAT word sets ::: Careers–Family",
"Additional Gigaword results"
]
} | {
"answers": [
{
"annotation_id": [
"2c3fd5a75b89034a9d023fee4445ff2a0e939ab4",
"461cf1c58a1ac8c5dd9e0201fc5373a00b08d109"
],
"answer": [
{
"evidence": [
"To demonstrate indirect gender bias we adapt a pair of methods proposed by BIBREF4. First, we test whether the most-biased words prior to bias mitigation remain clustered following bias mitigation. To do this, we define a new subspace, $\\vec{b}_\\text{test}$, using the 23 word pairs used in the Google Analogy family test subset BIBREF14 following BIBREF1's (BIBREF1) method, and determine the 1000 most biased words in each corpus (the 500 words most similar to $\\vec{b}_\\text{test}$ and $-\\vec{b}_\\text{test}$) in the unmitigated embedding. For each debiased embedding we then project these words into 2D space with tSNE BIBREF15, compute clusters with k-means, and calculate the clusters' V-measure BIBREF16. Low values of cluster purity indicate that biased words are less clustered following bias mitigation."
],
"extractive_spans": [
"V-measure"
],
"free_form_answer": "",
"highlighted_evidence": [
"For each debiased embedding we then project these words into 2D space with tSNE BIBREF15, compute clusters with k-means, and calculate the clusters' V-measure BIBREF16."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To demonstrate indirect gender bias we adapt a pair of methods proposed by BIBREF4. First, we test whether the most-biased words prior to bias mitigation remain clustered following bias mitigation. To do this, we define a new subspace, $\\vec{b}_\\text{test}$, using the 23 word pairs used in the Google Analogy family test subset BIBREF14 following BIBREF1's (BIBREF1) method, and determine the 1000 most biased words in each corpus (the 500 words most similar to $\\vec{b}_\\text{test}$ and $-\\vec{b}_\\text{test}$) in the unmitigated embedding. For each debiased embedding we then project these words into 2D space with tSNE BIBREF15, compute clusters with k-means, and calculate the clusters' V-measure BIBREF16. Low values of cluster purity indicate that biased words are less clustered following bias mitigation."
],
"extractive_spans": [
"V-measure BIBREF16"
],
"free_form_answer": "",
"highlighted_evidence": [
"For each debiased embedding we then project these words into 2D space with tSNE BIBREF15, compute clusters with k-means, and calculate the clusters' V-measure BIBREF16. Low values of cluster purity indicate that biased words are less clustered following bias mitigation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"843ec3936b0f6a19983e14ad7537584fb3cfa60c",
"cc5c9dd31f74b3e990966037d49e05bcdab1a1a9"
],
"answer": [
{
"evidence": [
"We have replicated two state-of-the-art bias mitigation techniques, WED and CDA, on two large corpora, Wikipedia and the English Gigaword. In our empirical comparison, we found that although both methods mitigate direct gender bias and maintain the interpretability of the space, WED failed to maintain a robust representation of gender (the best variants had an error rate of 23% average when drawing non-biased analogies, suggesting that too much gender information was removed). A new variant of CDA we propose (the Names Intervention) is the only to successfully mitigate indirect gender bias: following its application, previously biased words are significantly less clustered according to gender, with an average of 49% reduction in cluster purity when clustering the most biased words. We also proposed Counterfactual Data Substitution, which generally performed better than the CDA equivalents, was notably quicker to compute (as Word2Vec is linear in corpus size), and in theory allows for multiple intervention layers without a corpus becoming exponentially large."
],
"extractive_spans": [
"WED",
"CDA"
],
"free_form_answer": "",
"highlighted_evidence": [
"We have replicated two state-of-the-art bias mitigation techniques, WED and CDA, on two large corpora, Wikipedia and the English Gigaword."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We have replicated two state-of-the-art bias mitigation techniques, WED and CDA, on two large corpora, Wikipedia and the English Gigaword. In our empirical comparison, we found that although both methods mitigate direct gender bias and maintain the interpretability of the space, WED failed to maintain a robust representation of gender (the best variants had an error rate of 23% average when drawing non-biased analogies, suggesting that too much gender information was removed). A new variant of CDA we propose (the Names Intervention) is the only to successfully mitigate indirect gender bias: following its application, previously biased words are significantly less clustered according to gender, with an average of 49% reduction in cluster purity when clustering the most biased words. We also proposed Counterfactual Data Substitution, which generally performed better than the CDA equivalents, was notably quicker to compute (as Word2Vec is linear in corpus size), and in theory allows for multiple intervention layers without a corpus becoming exponentially large."
],
"extractive_spans": [
"WED",
"CDA"
],
"free_form_answer": "",
"highlighted_evidence": [
"We have replicated two state-of-the-art bias mitigation techniques, WED and CDA, on two large corpora, Wikipedia and the English Gigaword."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"03ad48c461f5fe0e0e7a9ca86e62f00ae90a2071",
"5e92a7be3d5dab431b2a619bc4e4132094fa8f86"
],
"answer": [
{
"evidence": [
"We fixedly associate pairs of names for swapping, thus expanding BIBREF5's short list of gender pairs vastly. Clearly both name frequency and the degree of gender-specificity are relevant to this bipartite matching. If only frequency were considered, a more gender-neutral name (e.g. Taylor) could be paired with a very gender-specific name (e.g. John), which would negate the gender intervention in many cases (namely whenever a male occurrence of Taylor is transformed into John, which would also result in incorrect pronouns, if present). If, on the other hand, only the degree of gender-specificity were considered, we would see frequent names (like James) being paired with far less frequent names (like Sybil), which would distort the overall frequency distribution of names. This might also result in the retention of a gender signal: for instance, swapping a highly frequent male name with a rare female name might simply make the rare female name behave as a new link between masculine contexts (instead of the original male name), as it rarely appears in female contexts."
],
"extractive_spans": [
"name frequency",
"the degree of gender-specificity"
],
"free_form_answer": "",
"highlighted_evidence": [
"We fixedly associate pairs of names for swapping, thus expanding BIBREF5's short list of gender pairs vastly. Clearly both name frequency and the degree of gender-specificity are relevant to this bipartite matching."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Figure FIGREF13 shows a plot of various names' number of primary gender occurances against their secondary gender occurrences, with red dots for primary-male and blue crosses for primary-female names. The problem of finding name-pairs thus decomposes into a Euclidean-distance bipartite matching problem, which can be solved using the Hungarian method BIBREF7. We compute pairs for the most frequent 2500 names of each gender in the SSA dataset. There is also the problem that many names are also common nouns (e.g. Amber, Rose, or Mark), which we solve using Named Entity Recognition."
],
"extractive_spans": [],
"free_form_answer": "By solving the Euclidean-distance bipartite matching problem of names by frequency\nand gender-specificity",
"highlighted_evidence": [
"Figure FIGREF13 shows a plot of various names' number of primary gender occurances against their secondary gender occurrences, with red dots for primary-male and blue crosses for primary-female names. The problem of finding name-pairs thus decomposes into a Euclidean-distance bipartite matching problem, which can be solved using the Hungarian method BIBREF7."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"496e11d6b1b4a51a1c47ff535666ef10bfe70960",
"6cbe230a21885b4d022fb2572ad92d18ae18cf1a"
],
"answer": [
{
"evidence": [
"To improve CDA we make two proposals. The first, Counterfactual Data Substitution (CDS), is designed to avoid text duplication in favour of substitution. The second, the Names Intervention, is a method which can be applied to either CDA or CDS, and treats bias inherent in first names. It does so using a novel name pairing strategy that accounts for both name frequency and gender-specificity. Using our improvements, the clusters of the most biased words exhibit a reduction of cluster purity by an average of 49% across both corpora following treatment, thereby offering a partial solution to the problem of indirect bias as formalised by BIBREF4. [author=simone,color=blue!40,size=,fancyline,caption=,]first part of reaction to reviewer 4Additionally, although one could expect that the debiased embeddings might suffer performance losses in computational linguistic tasks, our embeddings remain useful for at least two such tasks, word similarity and sentiment classification BIBREF6."
],
"extractive_spans": [
"word similarity",
"sentiment classification"
],
"free_form_answer": "",
"highlighted_evidence": [
"Additionally, although one could expect that the debiased embeddings might suffer performance losses in computational linguistic tasks, our embeddings remain useful for at least two such tasks, word similarity and sentiment classification BIBREF6."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In our experiments, we test the degree to which the spaces are successful at mitigating direct and indirect bias, as well as the degree to which they can still be used in two NLP tasks standardly performed with embeddings, word similarity and sentiment classification. We also introduce one further, novel task, which is designed to quantify how well the embedding spaces capture an understanding of gender using non-biased analogies. Our evaluation matrix and methodology is expanded below."
],
"extractive_spans": [
"word similarity",
"sentiment classification",
"understanding of gender using non-biased analogies"
],
"free_form_answer": "",
"highlighted_evidence": [
"In our experiments, we test the degree to which the spaces are successful at mitigating direct and indirect bias, as well as the degree to which they can still be used in two NLP tasks standardly performed with embeddings, word similarity and sentiment classification. We also introduce one further, novel task, which is designed to quantify how well the embedding spaces capture an understanding of gender using non-biased analogies."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"2143a9d6f40a2433be7fe37e84d58ab92e81e78a",
"5fc01daa4bf2a95a4fa7ebd72a43868dd02bfbc2"
],
"answer": [
{
"evidence": [
"In our experiments, we test the degree to which the spaces are successful at mitigating direct and indirect bias, as well as the degree to which they can still be used in two NLP tasks standardly performed with embeddings, word similarity and sentiment classification. We also introduce one further, novel task, which is designed to quantify how well the embedding spaces capture an understanding of gender using non-biased analogies. Our evaluation matrix and methodology is expanded below."
],
"extractive_spans": [
"test the degree to which the spaces are successful at mitigating direct and indirect bias, as well as the degree to which they can still be used in two NLP tasks standardly performed with embeddings, word similarity and sentiment classification"
],
"free_form_answer": "",
"highlighted_evidence": [
"In our experiments, we test the degree to which the spaces are successful at mitigating direct and indirect bias, as well as the degree to which they can still be used in two NLP tasks standardly performed with embeddings, word similarity and sentiment classification."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In our experiments, we test the degree to which the spaces are successful at mitigating direct and indirect bias, as well as the degree to which they can still be used in two NLP tasks standardly performed with embeddings, word similarity and sentiment classification. We also introduce one further, novel task, which is designed to quantify how well the embedding spaces capture an understanding of gender using non-biased analogies. Our evaluation matrix and methodology is expanded below.",
"Experimental Setup ::: Direct bias",
"BIBREF0 introduce the Word Embedding Association Test (WEAT), which provides results analogous to earlier psychological work by BIBREF12 by measuring the difference in relative similarity between two sets of target words $X$ and $Y$ and two sets of attribute words $A$ and $B$. We compute Cohen's $d$ (a measure of the difference in relative similarity of the word sets within each embedding; higher is more biased), and a one-sided $p$-value which indicates whether the bias detected by WEAT within each embedding is significant (the best outcome being that no such bias is detectable). We do this for three tests proposed by BIBREF13 which measure the strength of various gender stereotypes: art–maths, arts–sciences, and careers–family.",
"Experimental Setup ::: Indirect bias",
"To demonstrate indirect gender bias we adapt a pair of methods proposed by BIBREF4. First, we test whether the most-biased words prior to bias mitigation remain clustered following bias mitigation. To do this, we define a new subspace, $\\vec{b}_\\text{test}$, using the 23 word pairs used in the Google Analogy family test subset BIBREF14 following BIBREF1's (BIBREF1) method, and determine the 1000 most biased words in each corpus (the 500 words most similar to $\\vec{b}_\\text{test}$ and $-\\vec{b}_\\text{test}$) in the unmitigated embedding. For each debiased embedding we then project these words into 2D space with tSNE BIBREF15, compute clusters with k-means, and calculate the clusters' V-measure BIBREF16. Low values of cluster purity indicate that biased words are less clustered following bias mitigation.",
"Experimental Setup ::: Word similarity",
"The quality of a space is traditionally measured by how well it replicates human judgements of word similarity. The SimLex-999 dataset BIBREF17 provides a ground-truth measure of similarity produced by 500 native English speakers. Similarity scores in an embedding are computed as the cosine angle between word-vector pairs, and Spearman correlation between embedding and human judgements are reported. We measure correlative significance at $\\alpha = 0.01$.",
"Experimental Setup ::: Sentiment classification",
"Following BIBREF6, we use a standard sentiment classification task to quantify the downstream performance of the embedding spaces when they are used as a pretrained word embedding input BIBREF18 to Doc2Vec on the Stanford Large Movie Review dataset. The classification is performed by an SVM classifier using the document embeddings as features, trained on 40,000 labelled reviews and tested on the remaining 10,000 documents, reported as error percentage.",
"Experimental Setup ::: Non-biased gender analogies",
"When proposing WED, BIBREF1 use human raters to class gender-analogies as either biased (woman:housewife :: man:shopkeeper) or appropriate (woman:grandmother :: man::grandfather), and postulate that whilst biased analogies are undesirable, appropriate ones should remain. Our new analogy test uses the 506 analogies in the family analogy subset of the Google Analogy Test set BIBREF14 to define many such appropriate analogies that should hold even in a debiased environment, such as boy:girl :: nephew:niece. We use a proportional pair-based analogy test, which measures each embedding's performance when drawing a fourth word to complete each analogy, and report error percentage."
],
"extractive_spans": [
"Direct bias",
"Indirect bias",
"Word similarity",
"Sentiment classification",
"Non-biased gender analogies"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our evaluation matrix and methodology is expanded below.",
"Direct bias\nBIBREF0 introduce the Word Embedding Association Test (WEAT), which provides results analogous to earlier psychological work by BIBREF12 by measuring the difference in relative similarity between two sets of target words $X$ and $Y$ and two sets of attribute words $A$ and $B$.",
"Indirect bias\nTo demonstrate indirect gender bias we adapt a pair of methods proposed by BIBREF4. First, we test whether the most-biased words prior to bias mitigation remain clustered following bias mitigation.",
"Word similarity\nThe quality of a space is traditionally measured by how well it replicates human judgements of word similarity.",
"Sentiment classification\nFollowing BIBREF6, we use a standard sentiment classification task to quantify the downstream performance of the embedding spaces when they are used as a pretrained word embedding input BIBREF18 to Doc2Vec on the Stanford Large Movie Review dataset.",
"Non-biased gender analogies\nWhen proposing WED, BIBREF1 use human raters to class gender-analogies as either biased (woman:housewife :: man:shopkeeper) or appropriate (woman:grandmother :: man::grandfather), and postulate that whilst biased analogies are undesirable, appropriate ones should remain. Our new analogy test uses the 506 analogies in the family analogy subset of the Google Analogy Test set BIBREF14 to define many such appropriate analogies that should hold even in a debiased environment, such as boy:girl :: nephew:niece."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How is cluster purity measured?",
"What was the previous state of the art for bias mitigation?",
"How are names paired in the Names Intervention?",
"Which tasks quantify embedding quality?",
"What empirical comparison methods are used?"
],
"question_id": [
"130d73400698e2b3c6860b07f2e957e3ff022d48",
"7e9aec2bdf4256c6249cad9887c168d395b35270",
"1acf06105f6c1930f869347ef88160f55cbf382b",
"9ce90f4132b34a328fa49a63e897f376a3ad3ca8",
"3138f916e253abed643d3399aa8a4555b2bd8c0f"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"bias",
"bias",
"bias",
"bias",
"bias"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Word sets used by WED with examples",
"Figure 2: Frequency and gender-specificity of names in the SSA dataset",
"Figure 3: Bipartite matching of names by frequency and gender-specificity",
"Figure 4: Variance explained by the top Principal Components of the definitional word pairs (left) and random unit vectors (right)",
"Figure 5: Most biased cluster purity results",
"Table 1: Direct bias results",
"Figure 6: Clustering of biased words (Gigaword)",
"Figure 7: Reclassification of most biased words results",
"Table 2: Word similarity Results",
"Figure 8: Sentiment classification results",
"Figure 9: Non-biased gender analogy results",
"Figure 10: Most biased cluster purity results",
"Figure 11: Reclassification of most biased words results",
"Figure 12: Sentiment classification results",
"Figure 13: Non-biased gender analogy results"
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"5-Figure4-1.png",
"6-Figure5-1.png",
"6-Table1-1.png",
"7-Figure6-1.png",
"7-Figure7-1.png",
"7-Table2-1.png",
"8-Figure8-1.png",
"8-Figure9-1.png",
"10-Figure10-1.png",
"11-Figure11-1.png",
"11-Figure12-1.png",
"11-Figure13-1.png"
]
} | [
"How are names paired in the Names Intervention?"
] | [
[
"1909.00871-Improvements to CDA ::: The Names Intervention-3",
"1909.00871-Improvements to CDA ::: The Names Intervention-2"
]
] | [
"By solving the Euclidean-distance bipartite matching problem of names by frequency\nand gender-specificity"
] | 229 |
1912.03010 | Semantic Mask for Transformer based End-to-End Speech Recognition | Attention-based encoder-decoder model has achieved impressive results for both automatic speech recognition (ASR) and text-to-speech (TTS) tasks. This approach takes advantage of the memorization capacity of neural networks to learn the mapping from the input sequence to the output sequence from scratch, without the assumption of prior knowledge such as the alignments. However, this model is prone to overfitting, especially when the amount of training data is limited. Inspired by SpecAugment and BERT, in this paper, we propose a semantic mask based regularization for training such kind of end-to-end (E2E) model. The idea is to mask the input features corresponding to a particular output token, e.g., a word or a word-piece, in order to encourage the model to fill the token based on the contextual information. While this approach is applicable to the encoder-decoder framework with any type of neural network architecture, we study the transformer-based model for ASR in this work. We perform experiments on Librispeech 960h and TedLium2 data sets, and achieve the state-of-the-art performance on the test set in the scope of E2E models. | {
"paragraphs": [
[
"End-to-end (E2E) acoustic models, particularly with the attention-based encoder-decoder framework BIBREF0, have achieved a competitive recognition accuracy in a wide range of speech datasets BIBREF1. This model directly learns the mapping from the input acoustic signals to the output transcriptions without decomposing the problems into several different modules such as lexicon modeling, acoustic modeling and language modeling as in the conventional hybrid architecture. While this kind of E2E approach significantly simplifies the speech recognition pipeline, the weakness is that it is difficult to tune the strength of each component. One particular problem from our observations is that the attention based E2E model tends to make grammatical errors, which indicates that the language modeling power of the model is weak, possibly due to the small amount of training data, or the mismatch between the training and evaluation data. However, due to the jointly model approach in the attention model, it is unclear how to improve the strength of the language modeling power, i.e., attributing more weights to the previous output tokens in the decoder, or to improve the strength of the acoustic modeling power, i.e., attributing more weights to the context vector from the encoder.",
"While an external language model may be used to mitigate the weakness of the language modeling power of an attention-based E2E model, by either re-scoring the hypothesis or through shallow or deep fusion BIBREF2, the improvements are usually limited, and it incurs additional computational cost. Inspired by SpecAgument BIBREF3 and BERT BIBREF4, we propose a semantic mask approach to improve the strength of the language modeling power in the attention-based E2E model, which, at the same time, improves the generalization capacity of the model as well. Like SpecAugment, this approach masks out partial of the acoustic features during model training. However, instead of using a random mask as in SpecAugment, our approach masks out the whole patch of the features corresponding to an output token during training, e.g., a word or a word-piece. The motivation is to encourage the model to fill in the missing token (or correct the semantic error) based on the contextual information with less acoustic evidence, and consequently, the model may have a stronger language modeling power and is more robust to acoustic distortions.",
"In principle, our approach is applicable to the attention-based E2E framework with any type of neural network encoder. To constrain our research scope, we focus on the transformer architecture BIBREF5, which is originally proposed for neural machine translation. Recently, it has been shown that the transformer model can achieve competitive or even higher recognition accuracy compared with the recurrent neural network (RNN) based E2E model for speech recognition BIBREF6. Compared with RNNs, the transformer model can capture the long-term correlations with a computational complexity of $O(1)$, instead of using many steps of back-propagation through time (BPTT) as in RNNs. We evaluate our transformer model with semantic masking on Librispeech and TedLium datasets. We show that semantic masking can achieve significant word error rate reduction (WER) on top of SpecAugment, and we report the lowest WERs on the test sets of the Librispeech corpus with an E2E model."
],
[
"As aforementioned, our approach is closely related to SpecAugment BIBREF3, which applies a random mask to the acoustic features to regularize an E2E model. However, our masking approach is more structured in the sense that we mask the acoustic signals corresponding to a particular output token. Besides the benefit in terms of model regularization, our approach also encourages the model to reconstruct the missing token based on the contextual information, which improves the power of the implicit language model in the decoder. The masking approach operates as the output token level is also similar to the approach used in BERT BIBREF4, but with the key difference that our approaches works in the acoustic space.",
"In terms of the model structure, the transformer-based E2E model has been investigated for both attention-based framework as well as RNN-T based models BIBREF7. Our model structure generally follows BIBREF8, with a minor difference that we used a deeper CNN before the self-attention blocks. We used a joint CTC/Attention loss to train our model following BIBREF6."
],
[
"Our masking approach requires the alignment information in order to perform the token-wise masking as shown in Figure FIGREF2. There are multiple speech recognition toolkits available to generate such kind of alignments. In this work, we used the Montreal Forced Alignertrained with the training data to perform forced-alignment between the acoustic signals and the transcriptions to obtain the word-level timing information. During model training, we randomly select a percentage of the tokens and mask the corresponding speech segments in each iteration. Following BIBREF4, in our work, we randomly sample 15% of the tokens and set the masked piece to the mean value of the whole utterance.",
"It should be noted that the semantic masking strategy is easy to combine with the previous SpecAugment masking strategy. Therefore, we adopt a time warp, frequency masking and time masking strategy in our masking strategy."
],
[
"Spectrum augmentation BIBREF3 is similar to our method, since both propose to mask spectrum for E2E model training. However, the intuitions behind these two methods are different. SpecAugment randomly masks spectrum in order to add noise to the source input, making the E2E ASR problem harder and prevents the over-fitting problem in a large E2E model.",
"In contrast, our model aims to force the decoder to learn a better language model. Suppose that if a few words' speech features are masked, the E2E model has to predict the token based on other signals, such as tokens that have generated or other unmasked speech features. In this way, we might alleviate the over-fitting issue that generating words only considering its corresponding speech features while ignoring other useful features. We believe our model is more effective when the input is noisy, because a model may generate correct tokens without considering previous generated tokens in a noise-free setting but it has to consider other signals when inputs are noisy, which is confirmed in our experiment."
],
[
"Following BIBREF8, we add convolution layers before Transformer blocks and discard the widely used positional encoding component. According to our preliminary experiments, the convolution layers slightly improve the performance of the E2E model. In the following, we will describe the CNN layers and Transformer block respectively."
],
[
"We represent input signals as a sequence of log-Mel filter bank features, denoted as $\\mathbf {X}=(x_0 \\ldots , x_n)$, where $x_i$ is a 83-dim vector. Since the length of spectrum is much longer than text, we use VGG-like convolution block BIBREF9 with layer normalization and max-pooling function. The specific architecture is shown in Figure FIGREF6 . We hope the convolution block is able to learn local relationships within a small context and relative positional information. According to our experiments, the specific architecture outperforms Convolutional 2D subsampling method BIBREF6. We also use 1D-CNN in the decoder to extract local features replacing the position embedding ."
],
[
"Our Transformer architecture is implemented as BIBREF6, depicting in Figure FIGREF15. The transformer module consumes the outputs of CNN and extracts features with a self-attention mechanism. Suppose that $Q$, $K$ and $V$ are inputs of a transformer block, its outputs are calculated by the following equation",
"where $d_k$ is the dimension of the feature vector. To enable dealing with multiple attentions, multi-head attention is proposed, which is formulated as",
"where $d_{head}$ is the number of attention heads. Moreover, residual connection BIBREF10, feed-forward layer and layer normalization BIBREF11 are indispensable parts in Transformer, and their combinations are shown in Figure FIGREF15."
],
[
"Following previous work BIBREF6, we employ a multi-task learning strategy to train the E2E model. Formally speaking, both the E2E model decoder and the CTC module predict the frame-wise distribution of $Y$ given corresponding source $X$, denoted as $P_{s2s}(\\mathbf {Y}|\\mathbf {X})$ and $P_{ctc}(\\mathbf {Y}|\\mathbf {X})$. We weighted averaged two negative log likelihoods to train our model",
"where $\\alpha $ is set to 0.7 in our experiment.",
"We combine scores of E2E model $P_{s2s}$, CTC score $P_{ctc}$ and a RNN based language model $P_{rnn}$ in the decoding process, which is formulated as",
"where $\\beta _1$ and $\\beta _2$ are tuned on the development set. Following BIBREF12, we rescore our beam outputs based on another transformer based language model $P_{trans\\_lm}(\\mathbf {Y})$ and the sentence length penalty $\\text{Wordcount}(\\mathbf {Y})$.",
"where $P_{trans\\_lm}$ denotes the sentence generative probability given by a Transformer language model."
],
[
"In this section, we describe our experiments on LibriSpeech BIBREF1 and TedLium2 BIBREF13. We compare our results with state-of-the-art hybrid and E2E systems. We implemented our approach based on ESPnet BIBREF6, and the specific settings on two datasets are the same with BIBREF6, except the decoding setting. We use the beam size 20, $\\beta _1 = 0.5$, and $\\beta _2=0.7$ in our experiment."
],
[
"We represent input signals as a sequence of 80-dim log-Mel filter bank with 3-dim pitch features BIBREF17. SentencePiece is employed as the tokenizer, and the vocabulary size is 5000. The hyper-parameters in Transformer and SpecAugment follow BIBREF6 for a fair comparison. We use Adam algorithm to update the model, and the warmup step is 25000. The learning rate decreases proportionally to the inverse square root of the step number after the 25000-th step. We train our model 100 epochs on 4 P40 GPUs, which approximately costs 5 days to coverage. We also apply speed perturbation by changing the audio speed to 0.9, 1.0 and 1.1. Following BIBREF6, we average the last 5 checkpoints as the final model. Unlike BIBREF14 and BIBREF15, we use the same checkpoint for test-clean and test-other dataset.",
"The RNN language model uses the released LSTM language model provided by ESPnet. The Transformer language model for rescoring is trained on LibriSpeech language model corpus with the GPT-2 base setting (308M parameters). We use the code of NVIDIA Megatron-LM to train the Transformer language model.",
"We evaluate our model in different settings. The baseline Transformer represents the model with position embedding. The comparison between baseline Transformer and our architecture (Model with SpecAugment) indicates the improvements attributed to the architecture. Model with semantic mask is we use the semantic mask strategy on top of SpecAugment, which outperforms Model with SpecAugment with a large margin in a no external language model fusion setting, demonstrating that our masking strategy helps the E2E model to learn a better language model. The gap becomes smaller when equipped with a language model fusion component, which further confirms our motivation in Section SECREF1. Speed Perturbation does not help model performance on the clean dataset, but it is effective on the test-other dataset. Rescore is beneficial to both test-clean and test-other datasets.",
"As far as we know, our model is the best E2E ASR system on the Librispeech testset, which achieves a comparable result with wav2letter Transformer on test-clean dataset and a better result on test-other dataset, even though our model (75M parameters) is much smaller than the wav2letter Transformer (210M parameters). The reason might be that our semantic masking is more suitable on a noisy setting, because the input features are not reliable and the model has to predict the next token relying on previous ones and the whole context of the input. Our model is built upon the code base of ESPnet, and achieves relative $10\\%$ gains due to the better architecture and masking strategy. Comparing with hybrid methods, our model obtains a similar performance on the test-clean set, but is still worse than the best hybrid model on the test-other dataset.",
"We also analyze the performance of different masking strategies, showing in Table TABREF20, where all models are shallow fused with the RNN language model. The SpecAugment provides 30$\\%$ relative gains on test-clean and other datasets. According to the comparison between the second line and the third line, we find that the word masking is more effective on test-other dataset. The last row indicates word mask is complementary to random mask on the time axis."
],
[
"To verify the generalization of the semantic mask, we further conduct experiments on TedLium2 BIBREF18 dataset, which is extracted from TED talks. The corpus consists of 207 hours of speech data accompanying 90k transcripts. For a fair comparison, we use the same data-preprocessing method, Transformer architecture and hyperparameter settings as in BIBREF6. Our acoustic features are 80-dim log-Mel filter bank and 3-dim pitch features, which is normalized by the mean and the standard deviation for training set. The utterances with more than 3000 frames or more than 400 characters are discarded. The vocabulary size is set to 1000.",
"The experiment results are listed in Table TABREF21, showing a similar trend as the results in Librispeech dataset. Semantic mask is complementary to specagumentation, which enables better S2S language modeling training in an E2E model, resulting in a relative 4.5$\\%$ gain. The experiment proves the effectiveness of semantic mask on a different and smaller dataset."
],
[
"This paper presents a semantic mask method for E2E speech recognition, which is able to train a model to better consider the whole audio context for the disambiguation. Moreover, we elaborate a new architecture for E2E model, achieving state-of-the-art performance on the Librispeech test set in the scope of E2E models."
]
],
"section_name": [
"Introduction",
"Related Work",
"Semantic Masking ::: Masking Strategy",
"Semantic Masking ::: Why Semantic Mask Works?",
"Model",
"Model ::: CNN Layer",
"Model ::: Transformer Block",
"Model ::: ASR Training and Decoding",
"EXPERIMENT",
"EXPERIMENT ::: Librispeech 960h",
"EXPERIMENT ::: TedLium2",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"03cf4a5540618b63a642d06b88d9a4d90996f8bd",
"5ff180d558ac345a069791a01ddbad17df1170e1"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"While an external language model may be used to mitigate the weakness of the language modeling power of an attention-based E2E model, by either re-scoring the hypothesis or through shallow or deep fusion BIBREF2, the improvements are usually limited, and it incurs additional computational cost. Inspired by SpecAgument BIBREF3 and BERT BIBREF4, we propose a semantic mask approach to improve the strength of the language modeling power in the attention-based E2E model, which, at the same time, improves the generalization capacity of the model as well. Like SpecAugment, this approach masks out partial of the acoustic features during model training. However, instead of using a random mask as in SpecAugment, our approach masks out the whole patch of the features corresponding to an output token during training, e.g., a word or a word-piece. The motivation is to encourage the model to fill in the missing token (or correct the semantic error) based on the contextual information with less acoustic evidence, and consequently, the model may have a stronger language modeling power and is more robust to acoustic distortions."
],
"extractive_spans": [
"a word or a word-piece"
],
"free_form_answer": "",
"highlighted_evidence": [
"However, instead of using a random mask as in SpecAugment, our approach masks out the whole patch of the features corresponding to an output token during training, e.g., a word or a word-piece. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0530b496bd2590b14d6c4586b3541a16d461244a",
"d178d217fb1bb3245ee3be07d1aee066012c90e8"
],
"answer": [
{
"evidence": [
"The experiment results are listed in Table TABREF21, showing a similar trend as the results in Librispeech dataset. Semantic mask is complementary to specagumentation, which enables better S2S language modeling training in an E2E model, resulting in a relative 4.5$\\%$ gain. The experiment proves the effectiveness of semantic mask on a different and smaller dataset.",
"As far as we know, our model is the best E2E ASR system on the Librispeech testset, which achieves a comparable result with wav2letter Transformer on test-clean dataset and a better result on test-other dataset, even though our model (75M parameters) is much smaller than the wav2letter Transformer (210M parameters). The reason might be that our semantic masking is more suitable on a noisy setting, because the input features are not reliable and the model has to predict the next token relying on previous ones and the whole context of the input. Our model is built upon the code base of ESPnet, and achieves relative $10\\%$ gains due to the better architecture and masking strategy. Comparing with hybrid methods, our model obtains a similar performance on the test-clean set, but is still worse than the best hybrid model on the test-other dataset."
],
"extractive_spans": [
"relative 4.5$\\%$ gain",
"built upon the code base of ESPnet, and achieves relative $10\\%$ gains due to the better architecture and masking strategy"
],
"free_form_answer": "",
"highlighted_evidence": [
"Semantic mask is complementary to specagumentation, which enables better S2S language modeling training in an E2E model, resulting in a relative 4.5$\\%$ gain.",
"Our model is built upon the code base of ESPnet, and achieves relative $10\\%$ gains due to the better architecture and masking strategy.",
"Our model is built upon the code base of ESPnet, and achieves relative $10\\%$ gains due to the better architecture and masking strategy. Comparing with hybrid methods, our model obtains a similar performance on the test-clean set, but is still worse than the best hybrid model on the test-other dataset."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As far as we know, our model is the best E2E ASR system on the Librispeech testset, which achieves a comparable result with wav2letter Transformer on test-clean dataset and a better result on test-other dataset, even though our model (75M parameters) is much smaller than the wav2letter Transformer (210M parameters). The reason might be that our semantic masking is more suitable on a noisy setting, because the input features are not reliable and the model has to predict the next token relying on previous ones and the whole context of the input. Our model is built upon the code base of ESPnet, and achieves relative $10\\%$ gains due to the better architecture and masking strategy. Comparing with hybrid methods, our model obtains a similar performance on the test-clean set, but is still worse than the best hybrid model on the test-other dataset."
],
"extractive_spans": [],
"free_form_answer": "10%",
"highlighted_evidence": [
"As far as we know, our model is the best E2E ASR system on the Librispeech testset, which achieves a comparable result with wav2letter Transformer on test-clean dataset and a better result on test-other dataset, even though our model (75M parameters) is much smaller than the wav2letter Transformer (210M parameters).",
"Our model is built upon the code base of ESPnet, and achieves relative $10\\%$ gains due to the better architecture and masking strategy."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"no",
"no"
],
"question": [
"How do they define their tokens (words, word-piece)?",
"By how much do they outperform existing state-of-the-art model on end-to-end Speech recognition?s "
],
"question_id": [
"810e6d09813486a64e87ef6c1fb9b1e205871632",
"ab8b0e6912a7ca22cf39afdac5531371cda66514"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. An example of semantic mask",
"Fig. 2. CNN layer architecture.",
"Fig. 3. E2E ASR model architecture.",
"Table 1. Comparison of the Librispeech ASR benchmark",
"Table 2. Ablation test of different masking methods. The second line is a default setting of SpecAugment. The third line uses word mask to replace random time mask, and the last line combines both methods on the time axis.",
"Table 3. Experiment results on TEDLIUM2."
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-Figure3-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png"
]
} | [
"By how much do they outperform existing state-of-the-art model on end-to-end Speech recognition?s "
] | [
[
"1912.03010-EXPERIMENT ::: TedLium2-1",
"1912.03010-EXPERIMENT ::: Librispeech 960h-3"
]
] | [
"10%"
] | 230 |
1805.09960 | Phrase Table as Recommendation Memory for Neural Machine Translation | Neural Machine Translation (NMT) has drawn much attention due to its promising translation performance recently. However, several studies indicate that NMT often generates fluent but unfaithful translations. In this paper, we propose a method to alleviate this problem by using a phrase table as recommendation memory. The main idea is to add bonus to words worthy of recommendation, so that NMT can make correct predictions. Specifically, we first derive a prefix tree to accommodate all the candidate target phrases by searching the phrase translation table according to the source sentence. Then, we construct a recommendation word set by matching between candidate target phrases and previously translated target words by NMT. After that, we determine the specific bonus value for each recommendable word by using the attention vector and phrase translation probability. Finally, we integrate this bonus value into NMT to improve the translation results. The extensive experiments demonstrate that the proposed methods obtain remarkable improvements over the strong attentionbased NMT. | {
"paragraphs": [
[
"The past several years have witnessed a significant progress in Neural Machine Translation (NMT). Most NMT methods are based on the encoder-decoder architecture BIBREF0 , BIBREF1 , BIBREF2 and can achieve promising translation performance in a variety of language pairs BIBREF3 , BIBREF4 , BIBREF5 .",
"However, recent studies BIBREF6 , BIBREF7 show that NMT often generates words that make target sentences fluent, but unfaithful to the source sentences. In contrast, traditional Statistical Machine Translation (SMT) methods tend to rarely make this kind of mistakes. Fig. 1 shows an example that NMT makes mistakes when translating the phrase “jinkou dafu xiahua (the sharp decline in imports)” and the phrase “maoyi shuncha (the trade surplus)”, but SMT can produce correct results when translating these two phrases. BIBREF6 argues that the reason behind this is the use of distributed representations of words in NMT makes systems often generate words that seem natural in the context, but do not reflect the content of the source sentence. Traditional SMT can avoid this problem as it produces the translations based on phrase mappings.",
"Therefore, it will be beneficial to combine SMT and NMT to alleviate the previously mentioned problem. Actually, researchers have made some effective attempts to achieve this goal. Earlier studies were based on the SMT framework, and have been deeply discussed in BIBREF8 . Later, the researchers transfers to NMT framework. Specifically, coverage mechanism BIBREF9 , BIBREF10 , SMT features BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 and translation lexicons BIBREF6 , BIBREF16 , BIBREF17 have been fully explored. In contrast, phrase translation table, as the core of SMT, has not been fully studied. Recently, BIBREF18 and BIBREF19 explore the possibility of translating phrases in NMT. However, the “phrase” in their approaches are different from that used in phrase-based SMT. In BIBREF18 's models, the phrase pair must be a one-to-one mapping with a source phrase having a unique target phrase (named entity translation pairs). In BIBREF19 's models, the source side of a phrase pair must be a chunk. Therefore, it is still a big challenge to incorporate any phrase pair in the phrase table into NMT system to alleviate the unfaithfulness problem.",
"In this paper, we propose an effective method to incorporate a phrase table as recommendation memory into the NMT system. To achieve this, we add bonuses to the words in recommendation set to help NMT make better predictions. Generally, our method contains three steps. 1) In order to find out which words are worthy to recommend, we first derive a candidate target phrase set by searching the phrase table according to the input sentence. After that, we construct a recommendation word set at each decoding step by matching between candidate target phrases and previously translated target words by NMT. 2) We then determine the specific bonus value for each recommendable word by using the attention vector produced by NMT and phrase translation probability extracted from phrase table. 3) Finally we integrate the word bonus value into the NMT system to improve the final results.",
"In this paper, we make the following contributions:",
"1) We propose a method to incorporate the phrase table as recommendation memory into NMT system. We design a novel approach to find from the phrase table the target words worthy of recommendation, calculate their recommendation scores and use them to promote NMT to make better predictions.",
"2) Our empirical experiments on Chinese-English translation and English-Japanese translation tasks show the efficacy of our methods. For Chinese-English translation, we can obtain an average improvement of 2.23 BLEU points. For English-Japanese translation, the improvement can reach 1.96 BLEU points. We further find that the phrase table is much more beneficial than bilingual lexicons to NMT."
],
[
"NMT contains two parts, encoder and decoder, where encoder transforms the source sentence INLINEFORM0 into context vectors INLINEFORM1 . This context set is constructed by INLINEFORM2 stacked Long Short Term Memory (LSTM) BIBREF20 layers. INLINEFORM3 can be calculated as follows: DISPLAYFORM0 ",
"The decoder generates one target word at a time by computing the probability of INLINEFORM0 as follows: DISPLAYFORM0 ",
"where INLINEFORM0 is the score produced by NMT: DISPLAYFORM0 ",
"and INLINEFORM0 is the attention output: DISPLAYFORM0 ",
"the attention model calculates INLINEFORM0 as the weighted sum of the source-side context vectors: DISPLAYFORM0 DISPLAYFORM1 ",
" INLINEFORM0 is computed using the following formula: DISPLAYFORM0 "
],
[
"In section 2 we described how the standard NMT models calculate the probability of the next target word (Eq. (2)). Our goal in this paper is to improve the accuracy of this probability estimation by incorporating information from phrase tables. Our main idea is to find the recommendable words and increase their probabilities at each decoding time step. Thus, three questions arise:",
"1) Which words are worthy to recommend at each decoding step?",
"2) How to determine an appropriate bonus value for each recommendable word?",
"3) How to integrate the bonus value into NMT?",
"In this section, we will describe the specific methods to answer above three questions. As the basis of our work, we first introduce two definitions used by our methods.",
"Definition 1 (prefix of phrase): the prefix of a phrase is a word sequence which begins with the first word of the phrase and ends with any word of the phrase. Note that the prefix string can be empty. For a phrase INLINEFORM0 , this phrase contains four prefixes: INLINEFORM1 .",
"Definition 2 (suffix of partial translation): the suffix of the partial translation INLINEFORM0 is a word sequence, which begins with any word belonging to INLINEFORM1 , and ends with INLINEFORM2 . Similarly, the suffix string can also be empty. For partial translation INLINEFORM3 , there are four suffixes INLINEFORM4 ."
],
[
"The first step is to derive a candidate target phrase set for a source sentence. The recommendation words are selected from this set.",
"Given a source sentence INLINEFORM0 and a phrase translation table (as shown in upper right of Fig. 2), we can traverse the phrase translation table and get all the phrase pairs whose source side matches the input source sentence. Then, for each phrase pair, we add the target phrases with the top INLINEFORM1 highest phrase translation probabilities into the candidate target phrase set.",
"In order to improve efficiency of the next step, we represent this candidate target phrase set in a form of prefix tree. If the phrases contain the same prefix (Definition 1), the prefix tree can merge them and represent them using the same non-terminal nodes. The root of this prefix tree is an empty node. Fig. 2 shows an example to illustrate how we get the candidate target phrase set for a source sentence. In this example, In phrase table (upper right), we find four phrases whose source side matches the source sentence (upper left). We add the target phrases into candidate target phrase set (middle). Finally, we use a prefix tree (bottom) to represent the candidate target phrases.",
"With above preparations, we can start to construct the word recommendation set. In our method, we need to construct a word recommendation set INLINEFORM0 at each decoding step INLINEFORM1 . The basic idea is that if a prefix INLINEFORM2 (Definition 1) of a phrase in candidate target phrase set matches a suffix INLINEFORM3 (Definition 2) of the partial translation INLINEFORM4 , the next word of INLINEFORM5 in the phrase may be the next target word INLINEFORM6 to be predicted and thus is worthy to recommend.",
"Here, we take Fig. 2 as an example to illustrate our idea. We assume that the partial translation is “he settled in the US, and lived in the suburb of”. According to our definition, this partial translation contains a suffix “suburb of”. Meanwhile, in candidate target phrase set, there is a phrase (“suburb of Milwaukee”) whose two-word prefix is “suburb of” as well. We can notice that the next word of the prefix (\"Milwaukee\") is exactly the one that should be predicted by the decoder. Thus, we recommend “Milwaukee” by adding a bonus to it with the hope that when this low-frequency word is mistranslated by NMT, our recommendation can fix this mistake.",
"Under this assumption, the procedure of constructing the word recommendation set INLINEFORM0 is illustrated in Algorithm 1. We first get all suffixes of INLINEFORM1 (line 2) and all prefixes of target phrases belonging to candidate target phrase set (line 3). If a prefix of the candidate phrase matches a suffix of INLINEFORM2 , we add the next word of the prefix in the phrase into recommendation set INLINEFORM3 (line 4-7).",
"In the definition of the prefix and suffix, we also allow them to be an empty string. By doing so, we can add the first word of each phrase into the word recommendation set, since the suffix of INLINEFORM0 and the prefix of any target phrase always contain a match part INLINEFORM1 . The reason we add the first word of the phrase into recommendation set is that we hope our methods can still recommend some possible words when NMT has finished the translation of one phrase and begins to translate another new one, or predicts the first target word of the whole sentence.",
"[t] Construct recommendation word set Input: candidate target phrase set; already generated partial translation INLINEFORM0 ",
" Output: word recommendation set INLINEFORM0 [1] INLINEFORM1 Get all suffixes of INLINEFORM2 (denote each suffix by INLINEFORM3 ) Get all prefixes of each target phrase in candidate target phrase set (denote every prefix by INLINEFORM4 ) each suffix INLINEFORM5 and each prefix INLINEFORM6 INLINEFORM7 Add the next word of INLINEFORM8 into INLINEFORM9 INLINEFORM10 ",
"Now we already know which word is worthy to recommend. In order to facilitate the calculation of the bonus value (section 3.2), we also need to maintain the origin of each recommendation word. Here, the origin of a recommendation word contains two parts: 1) the phrase pair this word belongs to and 2) the phrase translation probability between the source and target phrases. Formally, for a recommendation word INLINEFORM0 , we can denote it by: DISPLAYFORM0 ",
"where INLINEFORM0 denotes the INLINEFORM1 -th phrase pair the recommendation word INLINEFORM2 belongs to (some words may belong to different phrase pairs and INLINEFORM3 denotes the number of phrase pairs). INLINEFORM4 is the source phrase and INLINEFORM5 is the target phrase. INLINEFORM6 is the phrase translation probability between the source and target phrases. Take Fig. 2 as an example. When the partial translation is “he”, word “settled” can be recommended according to algorithm 1. Word “settled” is contained in two phrase pairs and the translation probabilities are respectively 0.6 and 0.4. Thus, we can denote the word \"settled\" as follows: DISPLAYFORM0 "
],
[
"The next task is to calculate the bonus value for each recommendation word. For a recommendation word INLINEFORM0 denoted by Eq. (8), its bonus value is calculated as follows:",
"Step1: Extracting each phrase translation probability INLINEFORM0 .",
"Step2: For each phrase pair INLINEFORM0 , we convert the attention weight INLINEFORM1 in NMT (Eq. (6)) between target word INLINEFORM2 and source word INLINEFORM3 to phrase alignment probability INLINEFORM4 between target word INLINEFORM5 and source phrase INLINEFORM6 as follows: DISPLAYFORM0 ",
"where INLINEFORM0 is the number of words in phrase INLINEFORM1 . As shown in Eq. (10), our conversion method is making an average of word alignment probability INLINEFORM2 whose source word INLINEFORM3 belongs to source phrase INLINEFORM4 .",
"Step3: Calculating the bonus value for each recommendation word as follows: DISPLAYFORM0 ",
"From Eq. (11), the bonus value is determined by two factors, i.e., 1) alignment information INLINEFORM0 and 2) translation probability INLINEFORM1 . The process of involving INLINEFORM2 is important because the bonus value will be influenced by different source phrases that systems focus on. And we take INLINEFORM3 into consideration with a hope that the larger INLINEFORM4 is, the larger its bonus value is."
],
[
"The last step is to combine the bonus value with the conditional probability of the baseline NMT model (Eq.(2)). Specifically, we add the bonuses to the words on the basis of original NMT score (Eq. (3)) as follows: DISPLAYFORM0 ",
"where INLINEFORM0 is calculated by Eq. (11). INLINEFORM1 is the bonus weight, and specifically, it is the result of sigmoid function ( INLINEFORM2 ), where INLINEFORM3 is a learnable parameter, and this sigmoid function ensures that the final weight falls between 0 and 1."
],
[
"In this section, we describe the experiments to evaluate our proposed methods."
],
[
"We test the proposed methods on Chinese-to-English (CH-EN) translation and English-to-Japanese (EN-JA) translation. In CH-EN translation, we test the proposed methods with two data sets: 1) small data set, which includes 0.63M sentence pairs; 2) large-scale data set, which contains about 2.1M sentence pairs. NIST 2003 (MT03) dataset is used for validation. NIST2004-2006 (MT04-06) and NIST 2008 (MT08) datasets are used for testing. In EN-JA translation, we use KFTT dataset, which includes 0.44M sentence pairs for training, 1166 sentence pairs for validation and 1160 sentence pairs for testing."
],
[
"We use the Zoph_RNN toolkit to implement all our described methods. In all experiments, the encoder and decoder include two stacked LSTM layers. The word embedding dimension and the size of hidden layers are both set to 1,000. The minibatch size is set to 128. We limit the vocabulary to 30K most frequent words for both the source and target languages. Other words are replaced by a special symbol “UNK”. At test time, we employ beam search and beam size is set to 12. We use case-insensitive 4-gram BLEU score as the automatic metric BIBREF21 for translation quality evaluation."
],
[
"Our phrase translation table is learned directly from parallel data by Moses BIBREF22 . To ensure the quality of the phrase pair, in all experiments, the phrase translation table is filtered as follows: 1) out-of-vocabulary words in the phrase table are replaced by UNK; 2) we remove the phrase pairs whose words are all punctuations and UNK; 3) for a source phrase, we retain at most 10 target phrases having the highest phrase translation probabilities."
],
[
"We compare our method with other relevant methods as follows:",
"1) Moses: It is a widely used phrasal SMT system BIBREF22 .",
"2) Baseline: It is the baseline attention-based NMT system BIBREF23 , BIBREF24 .",
"3) Arthur: It is the state-of-the-art method which incorporates discrete translation lexicons into NMT model BIBREF6 . We choose automatically learned lexicons and bias method. We implement the method on the base of the baseline attention-based NMT system. Hyper parameter INLINEFORM0 is 0.001, the same as that reported in their work."
],
[
"Table 1 reports the detailed translation results for different methods. Comparing the first two rows in Table 1, it is very obvious that the attention-based NMT system Baseline substantially outperforms the phrase-based SMT system Moses on both CH-EN translation and EN-JA translation. The average improvement for CH-EN and EN-JA translation is up to 3.99 BLEU points (32.71 vs. 28.72) and 3.59 BLEU (25.99 vs. 22.40) points, respectively."
],
[
"The first question we are interested in is whether or not phrase translation table can improve the translation quality of NMT. Compared to the baseline, our method markedly improves the translation quality on both CH-EN translation and EN-JA translation. In CH-EN translation, the average improvement is up to 2.23 BLEU points (34.94 vs. 32.71). In EN-JA translation, the improvement can reach 1.96 BLEU points (27.95 vs. 25.99). It indicates that incorporating a phrase table into NMT can substantially improve NMT's translation quality.",
"In Fig. 3, we show an illustrative example of CH-EN translation. In this example, our method is able to obtain a correct translation while the baseline is not. Specifically, baseline NMT system mistranslates “jinkou dafu xiahua (the sharp decline in imports)” into “import of imports”, and incorrectly translates “maoyi shuncha (trade surplus)” into “trade”. But these two mistakes are fixed by our method, because there are two phrase translation pairs (“jinkou dafu xiahua” to “the sharp decline in imports” and “maoyi shuncha” to “trade surplus”) in the phrase table, and the correct translations are obtained due to our recommendation method."
],
[
"A natural question arises that whether it is more beneficial to incorporate a phrase translation table than the translation lexicons. From Table 1, we can conclude that both translation lexicons and phrase translation table can improve NMT system's translation quality. In CH-EN translation, Arthur improves the baseline NMT system with 0.81 BLEU points, while our method improves the baseline NMT system with 2.23 BLEU points. In EN-JA translation, Arthur improves the baseline NMT system with 0.73 BLEU points, while our method improves the baseline NMT system with 1.96 BLEU points. Therefore, it is very obvious that phrase information is more effective than lexicon information when we use them to improve the NMT system.",
"Fig. 4 shows an illustrative example. In this example, baseline NMT mistranslates “dianli (electricity) anquan (safe)” into “coal”. Arthur partially fixes this error and it can correctly translate “dianli (electrical)” into “electrical”, but the source word “anquan (safe)” is still missed. Fortunately, this mistake is fixed by our proposed method. The reason behind this is that Arthur uses information from translation lexicons, which makes the system only fix the translation mistake of an individual lexicon (in this example, it is “dianli (electrical)”), while our method uses the information from phrases, which makes the system can not only obtain the correct translation of the individual lexicon but also capture local lexicon reordering and fixed collocation etc.",
"Besides the BLEU score, we also conduct a subjective evaluation to validate the benefit of incorporating a phrase table in NMT. The subjective evaluation is conducted on CH-EN translation. As our method tries to solve the problem that NMT system cannot reflect the true meaning of the source sentence, the criterion of the subjective evaluation is the faithfulness of translation results. Specifically, five human evaluators, who are native Chinese and expert in English, are asked to evaluate the translations of 500 source sentences randomly sampled from the test sets without knowing which system a translation is selected from. The score ranges from 0 to 5. For a translation result, the higher its score is, the more faithful it is. Table 2 shows the average results of five subjective evaluations on CH-EN translation. As shown in Table 2,the faithfulness of translation results produced by our method is better than Arthur and baseline NMT system."
],
[
"When constructing the word recommendation set, our current methods are adding the next word of the match part into recommendation set. In order to test the validity of this strategy, we compare the current strategy with another system, in which, we can add all words in candidate target phrase set into recommendation set without matching. We denote this system by system(no matching), whose results are reported in line 5 in Table 1. From the results, we can conclude that in both CH-EN translation and EN-JA translation, system(no matching) can boost the baseline system, while the improvements are much smaller than our methods. It indicates that the matching between the phrase and partial translation is quite necessary for our methods.",
"As we discussed in Section 3.1, we allow the prefix and suffix to be an empty string to make first word of each phrase into the word recommendation set. To show effectiveness of this setting, we also implement another system as a comparison. In the system, the first words of each phrase are not included in the recommendation set (we denote the system by system(no first)). The results of this system are reported in line 6 in Table 1. As shown in Table 1, our methods performs better than system(no first)) on both CH-EN translation and EN-JA translation. This result shows that the first word of the target phrase is also important for our method and is worthy to recommend."
],
[
"We also conduct another experiment to find out whether or not our methods are still effective when much more sentence pairs are available. Therefore, the CH-EN experiments on millions of sentence pairs are conducted and Table 3 reports the results. We can conclude from Table 3 that our model can also improve the NMT translation quality on all of the test sets and the average improvement is up to 1.83 BLEU points."
],
[
"In this work, we focus on integrating the phrase translation table of SMT into NMT. And there have been several effective works to combine SMT and NMT.",
"Using coverage mechanism. BIBREF9 and BIBREF10 improved the over-translation and under-translation problems in NMT inspired by the coverage mechanism in SMT.",
"Extending beam search. BIBREF25 extended the beam search method with SMT hypotheses. BIBREF13 improved the beam search by using the SMT lattices.",
"Combining SMT features and results. BIBREF12 presented a log-linear model to integrate SMT features (translation model and the language model) into NMT. BIBREF26 and BIBREF27 proposed a supervised attention model for NMT to minimize the alignment disagreement between NMT and SMT. BIBREF11 proposed a method that incorporates the translations of SMT into NMT with an auxiliary classifier and a gating function. BIBREF28 proposed a neural combination model to fuse the NMT translation results and SMT translation results.",
"Incorporating translation lexicons. BIBREF6 , BIBREF17 attempted to integrate NMT with the probabilistic translation lexicons. BIBREF16 moved forward further by incorporating a bilingual dictionaries in NMT.",
"In above works, integrating the phrase translation table of SMT into NMT has not been fully studied.",
"Translating phrase in NMT. The most related works are BIBREF18 and BIBREF19 . Both methods attempted to explore the possibility of translating phrases as a whole in NMT. In their models, NMT can generate a target phrase in phrase memory or a word in vocabulary by using a gate. However, their “phrases” are different from that are used in phrase-based SMT. BIBREF18 's models only support a unique translation for a source phrase. In BIBREF19 's models, the source side of a phrase pair must be a chunk. Different from above two methods, our model can use any phrase pair in the phrase translation table and promising results can be achieved."
],
[
"In this paper, we have proposed a method to incorporate a phrase translation table as recommendation memory into NMT systems to alleviate the problem that the NMT system is opt to generate fluent but unfaithful translations.",
"Given a source sentence and a phrase translation table, we first construct a word recommendation set at each decoding step by using a matching method. Then we calculate a bonus value for each recommendable word. Finally we integrate the bonus value into NMT. The extensive experiments show that our method achieved substantial increases in both Chinese-English and English-Japanese translation tasks.",
"In the future, we plan to design more effective methods to calculate accurate bonus values."
],
[
"The research work described in this paper has been supported by the National Key Research and Development Program of China under Grant No. 2016QY02D0303 and the Natural Science Foundation of China under Grant No. 61333018 and 61673380. The research work in this paper also has been supported by Beijing Advanced Innovation Center for Language Resources."
]
],
"section_name": [
"Introduction",
"Neural Machine Translation",
"Phrase Table as Recommendation Memory for NMT",
"Word Recommendation Set",
"Bonus Value Calculation",
"Integrating Bonus Values into NMT",
"Experimental Settings",
"Dataset",
"Training and Evaluation Details",
"Phrase Translation Table",
"Translation Methods",
"Translation Results",
"Effect of Integrating Phrase Translation Table",
"Lexicon vs. Phrase",
"Different Methods to Construct Recommendation Set",
"Translation Results on Large Data",
"Related Work",
"Conclusions and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"7ce30a3562a3a56c5d77de5631deba7224807f8b",
"a4e137c735237c38d8e1b9d86f7e621a2ea8d9fa"
],
"answer": [
{
"evidence": [
"We use the Zoph_RNN toolkit to implement all our described methods. In all experiments, the encoder and decoder include two stacked LSTM layers. The word embedding dimension and the size of hidden layers are both set to 1,000. The minibatch size is set to 128. We limit the vocabulary to 30K most frequent words for both the source and target languages. Other words are replaced by a special symbol “UNK”. At test time, we employ beam search and beam size is set to 12. We use case-insensitive 4-gram BLEU score as the automatic metric BIBREF21 for translation quality evaluation."
],
"extractive_spans": [
"BLEU "
],
"free_form_answer": "",
"highlighted_evidence": [
". We use case-insensitive 4-gram BLEU score as the automatic metric BIBREF21 for translation quality evaluation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"2) Our empirical experiments on Chinese-English translation and English-Japanese translation tasks show the efficacy of our methods. For Chinese-English translation, we can obtain an average improvement of 2.23 BLEU points. For English-Japanese translation, the improvement can reach 1.96 BLEU points. We further find that the phrase table is much more beneficial than bilingual lexicons to NMT."
],
"extractive_spans": [
"BLEU"
],
"free_form_answer": "",
"highlighted_evidence": [
"For Chinese-English translation, we can obtain an average improvement of 2.23 BLEU points."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"8e23965e62d9a4f84bb0466b33d91f9ccaf89dd0",
"e6cbebed5f30d1c21f785743f666a78e3ee3fa5e"
],
"answer": [
{
"evidence": [
"2) Our empirical experiments on Chinese-English translation and English-Japanese translation tasks show the efficacy of our methods. For Chinese-English translation, we can obtain an average improvement of 2.23 BLEU points. For English-Japanese translation, the improvement can reach 1.96 BLEU points. We further find that the phrase table is much more beneficial than bilingual lexicons to NMT."
],
"extractive_spans": [
"Chinese-English",
"English-Japanese"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our empirical experiments on Chinese-English translation and English-Japanese translation tasks show the efficacy of our methods."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"2) Our empirical experiments on Chinese-English translation and English-Japanese translation tasks show the efficacy of our methods. For Chinese-English translation, we can obtain an average improvement of 2.23 BLEU points. For English-Japanese translation, the improvement can reach 1.96 BLEU points. We further find that the phrase table is much more beneficial than bilingual lexicons to NMT."
],
"extractive_spans": [
"Chinese-English ",
"English-Japanese"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our empirical experiments on Chinese-English translation and English-Japanese translation tasks show the efficacy of our methods. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"57b5cf1f3783f62ec0681d51603283f51913243b",
"6c092c4659acb7fab0fee64e3c604fe9f0f72568"
],
"answer": [
{
"evidence": [
"We test the proposed methods on Chinese-to-English (CH-EN) translation and English-to-Japanese (EN-JA) translation. In CH-EN translation, we test the proposed methods with two data sets: 1) small data set, which includes 0.63M sentence pairs; 2) large-scale data set, which contains about 2.1M sentence pairs. NIST 2003 (MT03) dataset is used for validation. NIST2004-2006 (MT04-06) and NIST 2008 (MT08) datasets are used for testing. In EN-JA translation, we use KFTT dataset, which includes 0.44M sentence pairs for training, 1166 sentence pairs for validation and 1160 sentence pairs for testing."
],
"extractive_spans": [
"NIST 2003 (MT03)",
"NIST2004-2006 (MT04-06)",
"NIST 2008 (MT08)",
"KFTT "
],
"free_form_answer": "",
"highlighted_evidence": [
"In CH-EN translation, we test the proposed methods with two data sets: 1) small data set, which includes 0.63M sentence pairs; 2) large-scale data set, which contains about 2.1M sentence pairs. NIST 2003 (MT03) dataset is used for validation. NIST2004-2006 (MT04-06) and NIST 2008 (MT08) datasets are used for testing. In EN-JA translation, we use KFTT dataset, which includes 0.44M sentence pairs for training, 1166 sentence pairs for validation and 1160 sentence pairs for testing."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We test the proposed methods on Chinese-to-English (CH-EN) translation and English-to-Japanese (EN-JA) translation. In CH-EN translation, we test the proposed methods with two data sets: 1) small data set, which includes 0.63M sentence pairs; 2) large-scale data set, which contains about 2.1M sentence pairs. NIST 2003 (MT03) dataset is used for validation. NIST2004-2006 (MT04-06) and NIST 2008 (MT08) datasets are used for testing. In EN-JA translation, we use KFTT dataset, which includes 0.44M sentence pairs for training, 1166 sentence pairs for validation and 1160 sentence pairs for testing."
],
"extractive_spans": [
"NIST 2003",
"NIST2004-2006",
"NIST 2008",
"KFTT"
],
"free_form_answer": "",
"highlighted_evidence": [
"NIST 2003 (MT03) dataset is used for validation. NIST2004-2006 (MT04-06) and NIST 2008 (MT08) datasets are used for testing. In EN-JA translation, we use KFTT dataset, which includes 0.44M sentence pairs for training, 1166 sentence pairs for validation and 1160 sentence pairs for testing."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"8590a79d0914e1928a8d0d53fb0c01abf4fcf8c4",
"dfaeb83c6688912fc8561a2ade63c461154365e7"
],
"answer": [
{
"evidence": [
"2) Baseline: It is the baseline attention-based NMT system BIBREF23 , BIBREF24 ."
],
"extractive_spans": [
"attention-based NMT system BIBREF23 , BIBREF24"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baseline: It is the baseline attention-based NMT system BIBREF23 , BIBREF24 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We compare our method with other relevant methods as follows:",
"1) Moses: It is a widely used phrasal SMT system BIBREF22 .",
"2) Baseline: It is the baseline attention-based NMT system BIBREF23 , BIBREF24 ."
],
"extractive_spans": [
" BIBREF23 , BIBREF24"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare our method with other relevant methods as follows:\n\n1) Moses: It is a widely used phrasal SMT system BIBREF22 .\n\n2) Baseline: It is the baseline attention-based NMT system BIBREF23 , BIBREF24 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"03ddb61531b5231c62e0080c7e3ee6ab46ca9aca",
"eb20b56ce7bbe804cfc58bb067d5185c634b7ad5"
],
"answer": [
{
"evidence": [
"Table 1 reports the detailed translation results for different methods. Comparing the first two rows in Table 1, it is very obvious that the attention-based NMT system Baseline substantially outperforms the phrase-based SMT system Moses on both CH-EN translation and EN-JA translation. The average improvement for CH-EN and EN-JA translation is up to 3.99 BLEU points (32.71 vs. 28.72) and 3.59 BLEU (25.99 vs. 22.40) points, respectively."
],
"extractive_spans": [],
"free_form_answer": "The average improvement for CH-EN is 3.99 BLEU points, for EN-JA it is 3.59 BLEU points.",
"highlighted_evidence": [
"The average improvement for CH-EN and EN-JA translation is up to 3.99 BLEU points (32.71 vs. 28.72) and 3.59 BLEU (25.99 vs. 22.40) points, respectively."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first question we are interested in is whether or not phrase translation table can improve the translation quality of NMT. Compared to the baseline, our method markedly improves the translation quality on both CH-EN translation and EN-JA translation. In CH-EN translation, the average improvement is up to 2.23 BLEU points (34.94 vs. 32.71). In EN-JA translation, the improvement can reach 1.96 BLEU points (27.95 vs. 25.99). It indicates that incorporating a phrase table into NMT can substantially improve NMT's translation quality."
],
"extractive_spans": [],
"free_form_answer": "In CH-EN translation, the average improvement is up to 2.23 BLEU points, in EN-JA translation, the improvement can reach 1.96 BLEU point.",
"highlighted_evidence": [
"Compared to the baseline, our method markedly improves the translation quality on both CH-EN translation and EN-JA translation. In CH-EN translation, the average improvement is up to 2.23 BLEU points (34.94 vs. 32.71). In EN-JA translation, the improvement can reach 1.96 BLEU points (27.95 vs. 25.99)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"what were the evaluation metrics?",
"what language pairs are explored?",
"what datasets did they use?",
"which attention based nmt method did they compare with?",
"by how much did their system improve?"
],
"question_id": [
"74a17eb3bf1d4f36e2db1459a342c529b9785f6e",
"4b6745982aa64fbafe09f7c88c8d54d520b3f687",
"6656a9472499331f4eda45182ea697a4d63e943c",
"430ad71a0fd715a038f3c0fe8d7510e9730fba23",
"b79ff0a50bf9f361c5e5fed68525283856662076"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 2: The procedure of constructing the target side prefix tree from candidate target phrase set for a source sentence.",
"Table 1: Translation results (BLEU score) for different translation methods. “∗” indicates that it is statistically significant better (p < 0.05) than Baseline and “†” indicates p < 0.01.",
"Table 2: Subjective evaluation of translation faithfulness.",
"Figure 4: Translation examples, where both two methods can improve the baseline system, but our proposed model produces a better translation result.",
"Table 3: Translation results (BLEU score) for different translation methods on large-scale data. “†” indicates that it is statistically significant better (p < 0.01) than Baseline."
],
"file": [
"3-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"5-Figure4-1.png",
"6-Table3-1.png"
]
} | [
"by how much did their system improve?"
] | [
[
"1805.09960-Translation Results-0",
"1805.09960-Effect of Integrating Phrase Translation Table-0"
]
] | [
"In CH-EN translation, the average improvement is up to 2.23 BLEU points, in EN-JA translation, the improvement can reach 1.96 BLEU point."
] | 232 |
1907.00937 | Semantic Product Search | We study the problem of semantic matching in product search, that is, given a customer query, retrieve all semantically related products from the catalog. Pure lexical matching via an inverted index falls short in this respect due to several factors: a) lack of understanding of hypernyms, synonyms, and antonyms, b) fragility to morphological variants (e.g."woman"vs."women"), and c) sensitivity to spelling errors. To address these issues, we train a deep learning model for semantic matching using customer behavior data. Much of the recent work on large-scale semantic search using deep learning focuses on ranking for web search. In contrast, semantic matching for product search presents several novel challenges, which we elucidate in this paper. We address these challenges by a) developing a new loss function that has an inbuilt threshold to differentiate between random negative examples, impressed but not purchased examples, and positive examples (purchased items), b) using average pooling in conjunction with n-grams to capture short-range linguistic patterns, c) using hashing to handle out of vocabulary tokens, and d) using a model parallel training architecture to scale across 8 GPUs. We present compelling offline results that demonstrate at least 4.7% improvement in Recall@100 and 14.5% improvement in mean average precision (MAP) over baseline state-of-the-art semantic search methods using the same tokenization method. Moreover, we present results and discuss learnings from online A/B tests which demonstrate the efficacy of our method. | {
"paragraphs": [
[
"At a high level, as shown in Figure FIGREF4 , a product search engine works as follows: a customer issues a query, which is passed to a lexical matching engine (typically an inverted index BIBREF0 , BIBREF1 ) to retrieve all products that contain words in the query, producing a match set. The match set passes through stages of ranking, wherein top results from the previous stage are re-ranked before the most relevant items are finally displayed. It is imperative that the match set contain a relevant and diverse set of products that match the customer intent in order for the subsequent rankers to succeed. However, inverted index-based lexical matching falls short in several key aspects:",
"In this paper, we address the question: Given rich customer behavior data, can we train a deep learning model to retrieve matching products in response to a query? Intuitively, there is reason to believe that customer behavior logs contain semantic information; customers who are intent on purchasing a product circumvent the limitations of lexical matching by query reformulation or by deeper exploration of the search results. The challenge is the sheer magnitude of the data as well as the presence of noise, a challenge that modern deep learning techniques address very effectively.",
"Product search is different from web search as the queries tend to be shorter and the positive signals (purchases) are sparser than clicks. Models based on conversion rates or click-through-rates may incorrectly favor accessories (like a phone cover) over the main product (like a cell phone). This is further complicated by shoppers maintaining multiple intents during a single search session: a customer may be looking for a specific television model while also looking for accessories for this item at the lowest price and browsing additional products to qualify for free shipping. A product search engine should reduce the effort needed from a customer with a specific mission (narrow queries) while allowing shoppers to explore when they are looking for inspiration (broad queries).",
"As mentioned, product search typically operates in two stages: matching and ranking. Products that contain words in the query ( INLINEFORM0 ) are the primary candidates. Products that have prior behavioral associations (products bought or clicked after issuing a query INLINEFORM1 ) are also included in the candidate set. The ranking step takes these candidates and orders them using a machine-learned rank function to optimize for customer satisfaction and business metrics.",
"We present a neural network trained with large amounts of purchase and click signals to complement a lexical search engine in ad hoc product retrieval. Our first contribution is a loss function with a built-in threshold to differentiate between random negative, impressed but not purchased, and purchased items. Our second contribution is the empirical result that recommends average pooling in combination with INLINEFORM0 -grams that capture short-range linguistic patterns instead of more complex architectures. Third, we show the effectiveness of consistent token hashing in Siamese networks for zero-shot learning and handling out of vocabulary tokens.",
"In Section SECREF2 , we highlight related work. In Section SECREF3 , we describe our model architecture, loss functions, and tokenization techniques including our approach for unseen words. We then introduce the readers to the data and our input representations for queries and products in Section SECREF4 . Section SECREF5 presents the evaluation metrics and our results. We provide implementation details and optimizations to efficiently train the model with large amounts of data in Section SECREF6 . Finally, we conclude in Section SECREF7 with a discussion of future work."
],
[
"There is a rich literature in natural language processing (NLP) and information retrieval (IR) on capturing the semantics of queries and documents. Word2vec BIBREF4 garnered significant attention by demonstrating the use of word embeddings to capture semantic structure; synonyms cluster together in the embedding space. This technique was successfully applied to document ranking for web search with the DESM model BIBREF5 . Building from the ideas in word2vec, BIBREF6 trained neural word embeddings to find neighboring words to expand queries with synonyms. Ultimately, based on these recent advancements and other key insights, the state-of-the-art models for semantic search can generally be classified into three categories:",
" BIBREF7 introduced Latent Semantic Analysis (LSA), which computes a low-rank factorization of a term-document matrix to identify semantic concepts and was further refined by BIBREF8 , BIBREF9 and extended by ideas from Latent Dirichlet Allocation (LDA) BIBREF10 in BIBREF11 . In 2013, BIBREF12 published the seminal paper in the space of factorized models by introducing the Deep Semantic Similarity Model (DSSM). Inspired by LSA and Semantic Hashing BIBREF13 , DSSM involves training an end-to-end deep neural network with a discriminative loss to learn a fixed-width representation for queries and documents. Fully connected units in the DSSM architecture were subsequently replaced with Convolutional Neural Networks (CNNs) BIBREF14 , BIBREF15 and Recurrent Neural Networks (RNNs) BIBREF16 to respect word ordering. In an alternate approach, which articulated the idea of interaction models, BIBREF17 introduced the Deep Relevance Matching Model (DRMM) which leverages an interaction matrix to capture local term matching within neural approaches which has been successfully extended by MatchPyramid BIBREF18 and other techniques BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 . Nevertheless, these interaction methods require memory and computation proportional to the number of words in the document and hence are prohibitively expensive for online inference. In addition, Duet BIBREF24 combines the approaches of DSSM and DRMM to balance the importance of semantic and lexical matching. Despite obtaining state-of-the-art results for ranking, these methods report limited success on ad hoc retrieval tasks BIBREF24 and only achieve a sub-50% Recall@100 and MAP on our product matching dataset, as shown with the ARC-II and Match Pyramid baselines in Table TABREF30 .",
"While we frequently evaluate our hypotheses on interaction matrix-based methods, we find that a factorized model architecture achieves comparable performance while only requiring constant memory per product. Hence, we only present our experiments as it pertains to factorized models in this paper. Although latent factor models improve ranking metrics due to their ability to memorize associations between the query and the product, we exclude it from this paper as we focus on the matching task. Our choice of model architecture was informed by empirical experiments while constrained by the cost per query and our ability to respond within 20 milliseconds for thousands of queries per second."
],
[
"Our neural network architecture is shown in Figure FIGREF9 . As in the distributed arm of the Duet model, our first model component is an embedding layer that consists of INLINEFORM0 parameters where INLINEFORM1 is the vocabulary and INLINEFORM2 is the embedding dimension. Each row corresponds to the parameters for a word. Unlike Duet, we share our embeddings across the query and product. Intuitively, sharing the embedding layer in a Siamese network works well, capturing local word-level matching even before training these networks. Our experiments in Table UID37 confirm this intuition. We discuss the specifics of our query and product representation in Section SECREF4 .",
"To generate a fixed length embedding for the query ( INLINEFORM0 ) and the product ( INLINEFORM1 ) from individual word embeddings, we use average pooling after observing little difference (<0.5%) in both MAP and Recall@100 relative to recurrent approaches like LSTM and GRU (see Table TABREF27 ). Average pooling also requires far less computation, reducing training time and inference latency. We reconciled this departure from state-of-the-art solutions for Question Answering and other NLP tasks through an analysis that showed that, unlike web search, both query and product information tend to be shorter, without long-range dependencies. Additionally, product search queries do not contain stop words and typically require every query word (or its synonym) to be present in the product.",
"Queries typically have fewer words than the product content. Because of this, we observed a noticeable difference in the magnitude of query and product embeddings. This was expected as the query and the product models were shared with no additional parameters to account for this variance. Hence, we introduced Batch Normalization layers BIBREF25 after the pooling layers for the query and the product arms. Finally, we compute the cosine similarity between INLINEFORM0 and INLINEFORM1 . During online A/B testing, we precompute INLINEFORM2 for all the products in the catalog and use a INLINEFORM3 -Nearest Neighbors algorithm to retrieve the most similar products to a given query INLINEFORM4 ."
],
[
"A critical decision when employing a vector space model is defining a match, especially in product search where there is an important tradeoff between precision and recall. For example, accessories like mounts may also be relevant for the query “led tv.”",
"Pruning results based on a threshold is a common practice to identify the match set. Pointwise loss functions, such as mean squared error (MSE) or mean absolute error (MAE), require an additional step post-training to identify the threshold. Pairwise loss functions do not provide guarantees on the magnitude of scores (only on relative ordering) and thus do not work well in practice with threshold-based pruning. Hence, we started with a pointwise 2-part hinge loss function as shown in Equation ( EQREF11 ) that maximizes the similarity between the query and a purchased product while minimizing the similarity between a query and random products. Define INLINEFORM0 , and let INLINEFORM1 if product INLINEFORM2 is purchased in response to query INLINEFORM3 , and INLINEFORM4 otherwise. Furthermore let INLINEFORM5 , and INLINEFORM6 for some predefined thresholds INLINEFORM7 and INLINEFORM8 and INLINEFORM9 . The two part hinge loss can be defined as DISPLAYFORM0 ",
" Intuitively, the loss ensures that when INLINEFORM0 then INLINEFORM1 is less than INLINEFORM2 and when INLINEFORM3 then INLINEFORM4 is above INLINEFORM5 . After some empirical tuning on a validation set, we set INLINEFORM6 and INLINEFORM7 .",
"As shown in Table TABREF26 , the 2-part hinge loss improved offline matching performance by more than 2X over the MSE baseline. However in Figure FIGREF12 , a large overlap in score distribution between positives and negatives can be seen. Furthermore, the score distribution for negatives appeared bimodal. After manually inspecting the negative training examples that fell in this region, we uncovered that these were products that were impressed but not purchased by the customer. From a matching standpoint, these products are usually valid results to show to customers. To improve the model's ability to distinguish positives and negatives considering these two classes of negatives, we introduced a 3-part hinge loss: DISPLAYFORM0 ",
" where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 denote indicators signifying if the product INLINEFORM3 was purchased, not impressed and not purchased, and impressed (but not purchased) in response to the query INLINEFORM4 , respectively, and INLINEFORM5 . Based on the 2-part hinge score distribution, INLINEFORM6 was set to INLINEFORM7 with INLINEFORM8 and INLINEFORM9 as before. The effectiveness of this strategy can be seen in Figure FIGREF14 , where one can observe a clear separation in scores between random and impressed negatives vs positives."
],
[
"In this section, we describe our tokenization methodology, or the procedure by which we break a string into a sequence of smaller components such as words, phrases, sub-words, or characters. We combine word unigram, word n-gram, and character trigram features into a bag of n-grams and use hashing to handle the large vocabulary size, similar to the fastText approach BIBREF26 .",
"This is the basic form of tokenization where the input query or product title is tokenized into a list of words. For example, the word unigrams of \"artistic iphone 6s case\" are [\"artistic\", \"iphone\", \"6s\", \"case\"].",
"In a bag of words model like ours, word unigrams lose word ordering. Instead of using LSTMs or CNNs to address this issue, we opted for INLINEFORM0 -grams as in BIBREF27 . For example, the word bigrams of \"artistic iphone 6s case\" are [\"artistic#iphone\", \"iphone#6s\", \"6s#case\"] and the trigrams are [\"artistic#iphone#6s\", \"iphone#6s#case\"]. These INLINEFORM1 -grams capture phrase-level information; for example if “for iphone” exists in the query, the model can infer that the customer's intention is to search for iphone accessories rather than iphone — an intent not captured by a unigram model.",
"Character trigram embeddings were proposed by the DSSM paper BIBREF12 . The string is broken into a list of all three-character sequences. For the example \"artistic iphone 6s case\", the character trigrams are [\"#ar\", \"art\", \"rti\", \"tis\", \"ist\", \"sti\", \"tic\", \"ic#\", \"c#i\", \"#ip\", \"iph\", \"pho\", \"hon\", \"one\", \"ne#\", \"e#6\", \"#6s\", \"6s#\", \"s#c\", \"#ca\", \"cas\", \"ase\", \"se#\"]. Character trigrams are robust to typos (“iphione” and “iphonr”) and handle compound words (“amazontv” and “firetvstick”) naturally. Another advantage in our setting is the ability to capture similarity of model parts and sizes.",
"It is computationally infeasible to maintain a vocabulary that includes all the possible word INLINEFORM0 -grams as the dictionary size grows exponentially with INLINEFORM1 . Thus, we maintain a \"short\" list of several tens or hundreds of thousands of INLINEFORM2 -grams based on token frequency. A common practice for most NLP applications is to mask the input or use the embedding from the 0th location when an out-of-vocabulary word is encountered. Unfortunately, in Siamese networks, assigning all unknown words to the same shared embedding location results in incorrectly mapping two different out-of-vocabulary words to the same representation. Hence, we experimented with using the “hashing trick\" BIBREF28 popularized by Vowpal Wabbit to represent higher order INLINEFORM3 -grams that are not present in the vocabulary. In particular, we hash out-of-vocabulary tokens to additional embedding bins. The combination of using a fixed hash function and shared embeddings ensures that unseen tokens that occur in both the query and document map to the same embedding vector. During our initial experiments with a bin size of 10,000, we noticed that hashing collisions incorrectly promoted irrelevant products for queries, led to overfitting, and did not improve offline metrics. However, setting a bin size 5-10 times larger than the vocabulary size improved the recall of the model.",
"There are several ways to combine the tokens from these tokenization methods. One could create separate embeddings for unigrams, bigrams, character trigrams, etc. and compute a weighted sum over the cosine similarity of these INLINEFORM0 -gram projections. But we found that the simple approach of combining all tokens in a single bag-of-tokens performs well. We conclude this section by referring the reader to Figure FIGREF21 , which walks through our tokenization methods for the example “artistic iphone 6s case”. In Table UID33 , we show example queries and products retrieved to highlight the efficacy of our best model to understand synonyms, intents, spelling errors and overall robustness."
],
[
"We use 11 months of search logs as training data and 1 month as evaluation. We sample 54 billion query-product training pairs. We preprocess these sampled pairs to 650 million rows by grouping the training data by query-product pairs over the entire time period and using the aggregated counts as weights for the pairs. We also decrease the training time by 3X by preprocessing the training data into tokens and using mmap to store the tokens. More details on our best practices for reducing training time can be found in Section SECREF6 .",
"For a given customer query, each product is in exactly one of three categories: purchased, impressed but not purchased, or random. For each query, we target a ratio of 6 impressed and 7 random products for every query-product purchase. We sample this way to train the model for both matching and ranking, although in this paper we focus on matching. Intuitively, matching should differentiate purchased and impressed products from random ones; ranking should differentiate purchased products from impressed ones.",
"We choose the most frequent words to build our vocabulary, referred to as INLINEFORM0 . Each token in the vocabulary is assigned a unique numeric token id, while remaining tokens are assigned 0 or a hashing based identifier. Queries are lowercased, split on whitespace, and converted into a sequence of token ids. We truncate the query tokens at the 99th percentile by length. Token vectors that are smaller than the predetermined length are padded to the right.",
"Products have multiple attributes, like title, brand, and color, that are material to the matching process. We evaluated architectures to embed every attribute independently and concatenate them to obtain the final product representation. However, large variability in the accuracy and availability of structured data across products led to 5% lower recall than simply concatenating the attributes. Hence, we decided to use an ordered bag of words of these attributes."
],
[
"In this section we describe our metrics, training procedure, and the results, including the impact of our method in production."
],
[
"We define two evaluation subtasks: matching and ranking.",
"Matching: The goal of the matching task is to retrieve all relevant documents from a large corpus for a given query. In order to measure the matching performance, we first sample a set of 20K queries. We then evaluate the model's ability to recall purchased products from a sub-corpus of 1 million products for those queries. Note that the 1 million product corpus contains purchased and impressed products for every query from the evaluation period as well as additional random negatives. We tune the model hyperparameters to maximize Recall@100 and Mean Average Precision (MAP).",
"Ranking: The goal of this task is to order a set of documents by relevance, defined as purchase count conditioned on the query. The set of documents contains purchased and impressed products. We report standard information retrieval ranking metrics, such as Normalized Discounted Cumulative Gain (NDCG) and Mean Reciprocal Rank (MRR)."
],
[
"In this section, we present the durable learnings from thousands of experiments. We fix the embedding dimension to 256, weight matrix initialization to Xavier initialization BIBREF29 , batch size to 8192, and the optimizer to ADAM with the configuration INLINEFORM0 for all the results presented. We refer to the hinge losses defined in Section SECREF10 with INLINEFORM1 and INLINEFORM2 as the L1 and L2 variants respectively. Unigram tokenization is used in Table TABREF26 and Table TABREF27 , as the relative ordering of results does not change with other more sophisticated tokenizations.",
"We present the results of different loss functions in Table TABREF26 . We see that the L2 variant of each loss consistently outperforms the L1. We hypothesize that L2 variants are robust to outliers in cosine similarity. The 3-part hinge loss outperforms the 2-part hinge loss in matching metrics in all experiments although the two loss functions have similar ranking performance. By considering impressed negatives, whose text is usually more similar to positives than negatives, separately from random negatives in the 3-part hinge loss, the scores for positives and random negatives become better separated, as shown in Section SECREF10 . The model can better differentiate between positives and random negatives, improving Recall and MAP. Because the ranking task is not distinguishing between relevant and random products but instead focuses on ordering purchased and impressed products, it is not surprising that the 2-part and 3-part loss functions have similar performance.",
"In Table TABREF27 , we present the results of using LSTM, GRU, and averaging to aggregate the token embeddings. Averaging performs similar to or slightly better than recurrent units with significantly less training time. As mentioned in Section SECREF8 , in the product search setting, queries and product titles tend to be relatively short, so averaging is sufficient to capture the short-range dependencies that exist in queries and product titles. Furthermore, recurrent methods are more expressive but introduce specialization between the query and title. Consequently, local word-level matching between the query and the product title may not be not captured as well.",
"In Table TABREF28 , we compare the performance of using different tokenization methods. We use average pooling and the 3-part L2 hinge loss. For each tokenization method, we select the top INLINEFORM0 terms by frequency in the training data. Unless otherwise noted, INLINEFORM1 was set to 125K, 25K, 64K, and 500K for unigrams, bigrams, character trigrams, and out-of-vocabulary (OOV) bins respectively. It is worth noting that using only character trigrams, which was an essential component of DSSM BIBREF12 , has competitive ranking but not matching performance compared to unigrams. Adding bigrams improves matching performance as bigrams capture short phrase-level information that is not captured by averaging unigrams. For example, the unigrams for “chocolate milk” and “milk chocolate” are the same although these are different products. Additionally including character trigrams improves the performance further as character trigrams provide generalization and robustness to spelling errors.",
"Adding OOV hashing improves the matching performance as it allows better generalization to infrequent or unseen terms, with the caveat that it introduces additional parameters. To differentiate between the impact of additional parameters and OOV hashing, the last two rows in Table TABREF28 compare 500K unigrams to 125K unigrams and 375K OOV bins. These models have the same number of parameters, but the model with OOV hashing performs better.",
"In Table TABREF29 , we present the results of using batch normalization, layer normalization, or neither on the aggregated query and product embeddings. The “Query Sorted” column refers to whether all positive, impressed, and random negative examples for a single query appear together or are shuffled throughout the data. The best matching performance is achieved using batch normalization and shuffled data. Using sorted data has a significantly negative impact on performance when using batch normalization but not when using layer normalization. Possibly, the batch estimates of mean and variance are highly biased when using sorted data.",
"Finally, in Table TABREF30 , we compare the results of our model to four baselines: DSSM BIBREF12 , Match Pyramid BIBREF18 , ARC-II BIBREF15 , and our model with frozen, randomly initialized embeddings. We only use word unigrams or character trigrams in our model, as it is not immediately clear how to extend the bag-of-tokens approach to methods that incorporate ordering. We compare the performance of using the 3-part L2 hinge loss to the original loss presented for each model. Across all baselines, matching performance of the model improves using the 3-part L2 hinge loss. ARC-II and Match Pyramid ranking performance is similar or lower when using the 3-part loss. Ranking performance improves for DSSM, possibly because the original approach uses only random negatives to approximate the softmax normalization. More complex models, like Match Pyramid and ARC-II, had significantly lower matching and ranking performance while taking significantly longer to train and evaluate. These models are also much harder to tune and tend to overfit.",
"The embeddings in our model are trained end-to-end. Previous experiments using other methods, including Glove and word2vec, to initialize the embeddings yielded poorer results than end-to-end training. When comparing our model with randomly initialized to one with trained embeddings, we see that end-to-end training results in over a 3X improvement in Recall@100 and MAP."
]
],
"section_name": [
"Introduction",
"Related Work",
"Neural Network Architecture",
"Loss Function",
"Tokenization Methods ",
"Data",
"Experiments",
"Metrics",
"Results"
]
} | {
"answers": [
{
"annotation_id": [
"03ed89ed3e19b04f3d8ef43becbf4fe2c7bacdd2",
"dcd23178e17cee5811f9c84977f71bc760820289"
],
"answer": [
{
"evidence": [
"Finally, in Table TABREF30 , we compare the results of our model to four baselines: DSSM BIBREF12 , Match Pyramid BIBREF18 , ARC-II BIBREF15 , and our model with frozen, randomly initialized embeddings. We only use word unigrams or character trigrams in our model, as it is not immediately clear how to extend the bag-of-tokens approach to methods that incorporate ordering. We compare the performance of using the 3-part L2 hinge loss to the original loss presented for each model. Across all baselines, matching performance of the model improves using the 3-part L2 hinge loss. ARC-II and Match Pyramid ranking performance is similar or lower when using the 3-part loss. Ranking performance improves for DSSM, possibly because the original approach uses only random negatives to approximate the softmax normalization. More complex models, like Match Pyramid and ARC-II, had significantly lower matching and ranking performance while taking significantly longer to train and evaluate. These models are also much harder to tune and tend to overfit."
],
"extractive_spans": [
"DSSM",
"Match Pyramid",
"ARC-II",
"our model with frozen, randomly initialized embeddings"
],
"free_form_answer": "",
"highlighted_evidence": [
"Finally, in Table TABREF30 , we compare the results of our model to four baselines: DSSM BIBREF12 , Match Pyramid BIBREF18 , ARC-II BIBREF15 , and our model with frozen, randomly initialized embeddings. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Finally, in Table TABREF30 , we compare the results of our model to four baselines: DSSM BIBREF12 , Match Pyramid BIBREF18 , ARC-II BIBREF15 , and our model with frozen, randomly initialized embeddings. We only use word unigrams or character trigrams in our model, as it is not immediately clear how to extend the bag-of-tokens approach to methods that incorporate ordering. We compare the performance of using the 3-part L2 hinge loss to the original loss presented for each model. Across all baselines, matching performance of the model improves using the 3-part L2 hinge loss. ARC-II and Match Pyramid ranking performance is similar or lower when using the 3-part loss. Ranking performance improves for DSSM, possibly because the original approach uses only random negatives to approximate the softmax normalization. More complex models, like Match Pyramid and ARC-II, had significantly lower matching and ranking performance while taking significantly longer to train and evaluate. These models are also much harder to tune and tend to overfit."
],
"extractive_spans": [
"DSSM BIBREF12 , Match Pyramid BIBREF18 , ARC-II BIBREF15 "
],
"free_form_answer": "",
"highlighted_evidence": [
"Finally, in Table TABREF30 , we compare the results of our model to four baselines: DSSM BIBREF12 , Match Pyramid BIBREF18 , ARC-II BIBREF15 , and our model with frozen, randomly initialized embeddings. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"70fe54af59ca9b57919f88b4c2d04bd62f9553fd",
"f37564aa05c69fa5f7d8a3f1ba703110e4485a04"
],
"answer": [
{
"evidence": [
"We use 11 months of search logs as training data and 1 month as evaluation. We sample 54 billion query-product training pairs. We preprocess these sampled pairs to 650 million rows by grouping the training data by query-product pairs over the entire time period and using the aggregated counts as weights for the pairs. We also decrease the training time by 3X by preprocessing the training data into tokens and using mmap to store the tokens. More details on our best practices for reducing training time can be found in Section SECREF6 ."
],
"extractive_spans": [],
"free_form_answer": "a self-collected dataset of 11 months of search logs as query-product pairs",
"highlighted_evidence": [
"We use 11 months of search logs as training data and 1 month as evaluation. We sample 54 billion query-product training pairs. We preprocess these sampled pairs to 650 million rows by grouping the training data by query-product pairs over the entire time period and using the aggregated counts as weights for the pairs. We also decrease the training time by 3X by preprocessing the training data into tokens and using mmap to store the tokens. More details on our best practices for reducing training time can be found in Section SECREF6 .\n\n"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use 11 months of search logs as training data and 1 month as evaluation. We sample 54 billion query-product training pairs. We preprocess these sampled pairs to 650 million rows by grouping the training data by query-product pairs over the entire time period and using the aggregated counts as weights for the pairs. We also decrease the training time by 3X by preprocessing the training data into tokens and using mmap to store the tokens. More details on our best practices for reducing training time can be found in Section SECREF6 ."
],
"extractive_spans": [
"11 months of search logs"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use 11 months of search logs as training data and 1 month as evaluation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"What were the baseline methods?",
"What dataset is used for training?"
],
"question_id": [
"d66c31f24f582c499309a435ec3c688dc3a41313",
"c47312f2ca834ee75fa9bfbf912ea04239064117"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 1: System architecture for augmenting product matching using semantic matching",
"Figure 2: Illustration of neural network architecture used for semantic search",
"Figure 3: Score distribution histogram shows large overlap for positives (right) and negatives (left) alongwith a bimodal distribution for negatives when using the 2-part hinge",
"Figure 4: Score distribution shows clear separation between purchased (right), seen but not purchased (center), and irrelevant products (left) when using the 3-part hinge",
"Figure 5: Aggregation of different tokenization methods illustrated with the processing of “artistic iphone 6s case”",
"Table 1: Loss Function Experiments using Unigram Tokenization and Average Pooling",
"Table 2: Token Embedding Aggregation Experiments using Unigram Tokenization",
"Table 3: Tokenization Experiments with Average Pooling and 3 Part L2 Hinge Loss",
"Table 4: Normalization Layer Experiments",
"Table 5: Comparison with Baselines",
"Figure 6: Training timewith various embedding dimensions",
"Table 6: Example Queries and Matched Products",
"Table 7: Shared versus Decoupled Embeddings for Query and Product",
"Table 8: Impact of Out-of-Vocabulary Bin Size"
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"4-Figure4-1.png",
"5-Figure5-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"8-Table4-1.png",
"8-Table5-1.png",
"8-Figure6-1.png",
"9-Table6-1.png",
"10-Table7-1.png",
"10-Table8-1.png"
]
} | [
"What dataset is used for training?"
] | [
[
"1907.00937-Data-0"
]
] | [
"a self-collected dataset of 11 months of search logs as query-product pairs"
] | 233 |
1811.01183 | Unsupervised Identification of Study Descriptors in Toxicology Research: An Experimental Study | Identifying and extracting data elements such as study descriptors in publication full texts is a critical yet manual and labor-intensive step required in a number of tasks. In this paper we address the question of identifying data elements in an unsupervised manner. Specifically, provided a set of criteria describing specific study parameters, such as species, route of administration, and dosing regimen, we develop an unsupervised approach to identify text segments (sentences) relevant to the criteria. A binary classifier trained to identify publications that met the criteria performs better when trained on the candidate sentences than when trained on sentences randomly picked from the text, supporting the intuition that our method is able to accurately identify study descriptors. | {
"paragraphs": [
[
"Support for this research was provided by a grant from the National Institute of Environmental Health Sciences (AES 16002-001), National Institutes of Health to Oak Ridge National Laboratory.",
"This research was supported in part by an appointment to the Oak Ridge National Laboratory ASTRO Program, sponsored by the U.S. Department of Energy and administered by the Oak Ridge Institute for Science and Education.",
"This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan."
],
[
"Extracting data elements such as study descriptors from publication full texts is an essential step in a number of tasks including systematic review preparation BIBREF0 , construction of reference databases BIBREF1 , and knowledge discovery BIBREF2 . These tasks typically involve domain experts identifying relevant literature pertaining to a specific research question or a topic being investigated, identifying passages in the retrieved articles that discuss the sought after information, and extracting structured data from these passages. The extracted data is then analyzed, for example to assess adherence to existing guidelines BIBREF1 . Figure FIGREF2 shows an example text excerpt with information relevant to a specific task (assessment of adherence to existing guidelines BIBREF1 ) highlighted.",
"Extracting the data elements needed in these tasks is a time-consuming and at present a largely manual process which requires domain expertise. For example, in systematic review preparation, information extraction generally constitutes the most time consuming task BIBREF4 . This situation is made worse by the rapidly expanding body of potentially relevant literature with more than one million papers added into PubMed each year BIBREF5 . Therefore, data annotation and extraction presents an important challenge for automation.",
"A typical approach to automated identification of relevant information in biomedical texts is to infer a prediction model from labeled training data – such a model can then be used to assign predicted labels to new data instances. However, obtaining training data for creating such prediction models can be very costly as it involves the step which these models are trying to automate – manual data extraction. Furthermore, depending on the task at hand, the types of information being extracted may vary significantly. For example, in systematic reviews of randomized controlled trials this information generally includes the patient group, the intervention being tested, the comparison, and the outcomes of the study (PICO elements) BIBREF4 . In toxicology research the extraction may focus on routes of exposure, dose, and necropsy timing BIBREF1 . Previous work has largely focused on identifying specific pieces of information such as biomedical events BIBREF6 or PICO elements BIBREF0 . However, depending on the domain and the end goal of the extraction, these may be insufficient to comprehensively describe a given study.",
"Therefore, in this paper we focus on unsupervised methods for identifying text segments (such as sentences or fixed length sequences of words) relevant to the information being extracted. We develop a model that can be used to identify text segments from text documents without labeled data and that only requires the current document itself, rather than an entire training corpus linked to the target document. More specifically, we utilize representation learning methods BIBREF7 , where words or phrases are embedded into the same vector space. This allows us to compute semantic relatedness among text fragments, in particular sentences or text segments in a given document and a short description of the type of information being extracted from the document, by using similarity measures in the feature space. The model has the potential to speed up identification of relevant segments in text and therefore to expedite annotation of domain specific information without reliance on costly labeled data.",
"We have developed and tested our approach on a reference database of rodent uterotropic bioassays BIBREF1 which are labeled according to their adherence to test guidelines set forth in BIBREF3 . Each study in the database is assigned a label determining whether or not it met each of six main criteria defined by the guidelines; however, the database does not contain sentence-level annotations or any information about where the criteria was mentioned in each publication. Due to the lack of fine-grained annotations, supervised learning methods cannot be easily applied to aid annotating new publications or to annotate related but distinct types of studies. This database therefore presents an ideal use-case for unsupervised approaches.",
"While our approach doesn't require any labeled data to work, we use the labels available in the dataset to evaluate the approach. We train a binary classification model for identifying publications which satisfied given criteria and show the model performs better when trained on relevant sentences identified by our method than when trained on sentences randomly picked from the text. Furthermore, for three out of the six criteria, a model trained solely on the relevant sentences outperforms a model which utilizes full text. The results of our evaluation support the intuition that semantic relatedness to criteria descriptions can help in identifying text sequences discussing sought after information.",
"There are two main contributions of this work. We present an unsupervised method that employs representation learning to identify text segments from publication full text which are relevant to/contain specific sought after information (such as number of dose groups). In addition, we explore a new dataset which hasn't been previously used in the field of information extraction.",
"The remainder of this paper is organized as follows. In the following section we provide more details of the task and the dataset used in this study. In Section SECREF3 we describe our approach. In Section SECREF4 we evaluate our model and discuss our results. In Section SECREF5 we compare our work to existing approaches. Finally, in Section SECREF6 we provide ideas for further study."
],
[
"This section provides more details about the specific task and the dataset used in our study which motivated the development of our model."
],
[
"Significant efforts in toxicology research are being devoted towards developing new in vitro methods for testing chemicals due to the large number of untested chemicals in use ( INLINEFORM0 75,000-80,000 BIBREF8 , BIBREF1 ) and the cost and time required by existing in vivo methods (2-3 years and millions of dollars per chemical BIBREF8 ). To facilitate the development of novel in vitro methods and assess the adherence to existing study guidelines, a curated database of high-quality in vivo rodent uterotrophic bioassay data extracted from research publications has recently been developed and published BIBREF1 .",
"The creation of the database followed the study protocol design set forth in BIBREF3 , which is composed of six minimum criteria (MC, Table TABREF5 ). An example of information pertaining to the criteria is shown in Figure FIGREF2 . Only studies which met all six minimum criteria were considered guideline-like (GL) and were included in a follow-up detailed study and the final database BIBREF1 . However, of the 670 publications initially considered for inclusion, only 93 ( INLINEFORM0 14%) were found to contain studies which met all six MC and could therefore be included in the final database; the remaining 577 publications could not be used in the final reference set. Therefore, significant time and resources could be saved by automating the identification and extraction of the MC.",
"While each study present in the database is assigned a label for each MC determining whether a given MC was met and the pertinent protocol information was manually extracted, there exist no fine-grained text annotations showing the exact location within each publication's full text where a given criteria was met. Therefore, our goal was to develop a model not requiring detailed text annotations that could be used to expedite the annotation of new publications being added into the database and potentially support the development of new reference databases focusing on different domains and sets of guidelines. Due to the lack of detailed annotations, our focus was on identification of potentially relevant text segments."
],
[
"The version of the database which contains both GL and non-GL studies consists of 670 publications (spanning the years 1938 through 2014) with results from 2,615 uterotrophic bioassays. Specifically, each entry in the database describes one study, and studies are linked to publications using PubMed reference numbers (PMIDs). Each study is assigned seven 0/1 labels – one for each of the minimum criteria and one for the overall GL/non-GL label. The database also contains more detailed subcategories for each label (for example “species” label for MC 1) which were not used in this study. The publication PDFs were provided to us by the database creators. We have used the Grobid library to convert the PDF files into structured text. After removing documents with missing PDF files and documents which were not converted successfully, we were left with 624 full text documents.",
"Each publication contains on average 3.7 studies (separate bioassays), 194 publications contain a single study, while the rest contain two or more studies (with 82 being the most bioassays per publication). The following excerpt shows an example sentence mentioning multiple bioassays (with different study protocols):",
"With the exception of the first study (experiment 1), which had group sizes of 12, all other studies had group sizes of 8.",
"For this experiment we did not distinguish between publications describing a single or multiple studies. Instead, our focus was on retrieving all text segments (which may be related to multiple studies) relevant to each of the criteria. For each MC, if a document contained multiple studies with different labels, we discarded that document from our analysis of that criteria; if a document contained multiple studies with the same label, we simply combine all those labels into a single label. Table TABREF8 shows the final size of the dataset."
],
[
"In this section we describe the method we have used for retrieving text segments related to the criteria described in the previous section. The intuition is based off question answering systems. We treat the criteria descriptions (Table TABREF5 ) as the question and the text segments within the publication that discusses the criteria as the answer. Given a full text publication, the goal is to find the text segments most likely to contain the answer.",
"We represent the criteria descriptions and text segments extracted from the documents as vectors of features, and utilize relatedness measures to retrieve text segments most similar to the descriptions. A similar step is typically performed by most question answering (QA) systems – in QA systems both the input documents and the question are represented as a sequence of embedding vectors and a retrieval system then compares the document and question representations to retrieve text segments most likely containing the answer BIBREF9 .",
"To account for the variations in language that can be used to describe the criteria, we represent words as vectors generated using Word2Vec BIBREF7 . The following two excerpts show two different ways MC 6 was described in text:",
"Animals were killed 24 h after being injected and their uteri were removed and weighed.",
"All animals were euthanized by exposure to ethyl ether 24 h after the final treatment.",
"We hypothesize that the use of word embedding features will allow us to detect relevant words which are not present in the criteria descriptions. BIBREF10 have shown that an important feature of Word2Vec embeddings is that similar words will have similar vectors because they appear in similar contexts. We utilize this feature to calculate similarity between the criteria descriptions and text segments (such as sentences) extracted from each document. A high-level overview of our approach is shown in Figure FIGREF9 .",
"We use the following method to retrieve the most relevant text segments:",
"Segment extraction: First, we break each document down into shorter sequences such as sentences or word sequences of fixed length. While the first option (sentences) results in text which is easier to process, it has the disadvantage of resulting in sequences of varying length which may affect the resulting similarity value. However, for simplicity, in this study we utilize the sentence version.",
"Segment/description representation: We represent each sequence and the input description as a sequence of vector representations. For this study we have utilized Word2Vec embeddings BIBREF7 trained using the Gensim library on our corpus of 624 full text publications.",
"Word to word similarities: Next we calculate similarity between each word vector from each sequence INLINEFORM0 and each word vector from the input description INLINEFORM1 using cosine similarity. The output of this step is a similarity matrix INLINEFORM2 for each sequence INLINEFORM3 , where INLINEFORM4 is the number of unique words in the sequence and INLINEFORM5 is the number of unique words in the description INLINEFORM6 .",
"Segment to description similarities: To obtain a similarity value representing the relatedness of each sequence to the input description we first convert each input matrix INLINEFORM0 into a vector INLINEFORM1 by choosing the maximum similarity value for each word in the sequence, that is INLINEFORM2 . Each sequence is then assigned a similarity value INLINEFORM3 which is calculated as INLINEFORM4 . In the future we are planning to experiment with different ways of calculating relatedness of the sequences to the descriptions, such as with computing similarity of embeddings created from the text fragments using approaches like Doc2Vec BIBREF11 . In this study, after finding the top sentences, we further break each sentence down into continuous n-grams to find the specific part of the sentence discussing the MC. We repeat the same process described above to calculate the relatedness of each n-gram to the description.",
"Candidate segments: For each document we select the top INLINEFORM0 text segments (sentences in the first step and 5-grams in the second step) most similar to the description."
],
[
"Figures FIGREF11 , FIGREF12 , and FIGREF13 show example annotations generated using our method for the first three criteria. For this example we ran our method on the abstract of the target document rather than the full text and highlighted only the single most similar sentence. The abstract used to produce these figures is the same as the abstract shown in Figure FIGREF2 . In all three figures, the lighter yellow color highlights the sentence which was found to be the most similar to a given MC description, the darker red color shows the top 5-gram found within the top sentence, and the bold underlined text is the text we are looking for (the correct answer). Annotations generated for the remaining three criteria are shown in Appendix SECREF7 .",
"Due to space limitations, Figures FIGREF11 , FIGREF12 , and FIGREF13 show results generated on abstracts rather than on full text; however, we have observed similarly accurate results when we applied our method to full text. The only difference between the abstracts and the full text version is how many top sentences we retrieved. When working with abstracts only, we observed that if the criteria was discussed in the abstract, it was generally sufficient to retrieve the single most similar sentence. However, as the criteria may be mentioned in multiple places within the document, when working with full text documents we have retrieved and analyzed the top k sentences instead of just a single sentence. In this case we have typically found the correct sentence/sentences among the top 5 sentences. We have also observed that the similar sentences which don't discuss the criteria directly (i.e. the “incorrect” sentences) typically discuss related topics. For example, consider the following three sentences:",
"After weaning on pnd 21, the dams were euthanized by CO2 asphyxiation and the juvenile females were individually housed.",
"Six CD(SD) rat dams, each with reconstituted litters of six female pups, were received from Charles River Laboratories (Raleigh, NC, USA) on offspring postnatal day (pnd) 16.",
"This validation study followed OECD TG 440, with six female weanling rats (postnatal day 21) per dose group and six treatment groups.",
"These three sentences were extracted from the abstract and the full text of a single document (document 20981862, the abstract of which is shown in Figures FIGREF2 and FIGREF11 - FIGREF21 ). These three sentences were retrieved as the most similar to MC 1, with similarity scores of 70.61, 65.31, and 63.69, respectively. The third sentence contains the “answer” to MC 1 (underlined). However, it can be seen the top two sentences also discuss the animals used in the study (more specifically, the sentences discuss the animals' housing and their origin)."
],
[
"The goal of this experiment was to explore empirically whether our approach truly identifies mentions of the minimum criteria in text. As we did not have any fine-grained annotations that could be used to directly evaluate whether our model identifies the correct sequences, we have used a different methodology. We have utilized the existing 0/1 labels which were available in the database (these were discussed in Section SECREF2 ) to train one binary classifier for each MC. The task of each of the classifiers is to determine whether a publication met the given criteria or not. We have then compared a baseline classifier trained on all full text with three other models:",
"The only difference between the four models is which sentences from each document are passed to the classifier for training and testing. The intuition is that a classifier utilizing the correct sentences should outperform both other models.",
"To avoid selecting the same sentences across the three models we removed documents which contained less than INLINEFORM0 sentences (Table TABREF17 , row Number of documents shows how many documents satisfied this condition). In all of the experiments presented in this section, the publication full text was tokenized, lower-cased, stemmed, and stop words were removed. All models used a Bernoulli Naïve Bayes classifier (scikit-learn implementation which used a uniform class prior) trained on binary occurrence matrices created using 1-3-grams extracted from the publications, with n-grams appearing in only one document removed. The complete results obtained from leave-one-out cross validation are shown in Table TABREF17 . In all cases we report classification accuracy. In the case of the random-k sentences model the accuracy was averaged over 10 runs of the model.",
"We compare the results to two baselines: (1) a baseline obtained by classifying all documents as belonging to the majority class (baseline 1 in Table TABREF17 ) and (2) a baseline obtained using the same setup (features and classification algorithm) as in the case of the top-/random-/bottom-k sentences models but which utilized all full text instead of selected sentences extracted from the text only (baseline 2 in Table TABREF17 )."
],
[
"Table TABREF17 shows that for four out of the six criteria (MC 1, MC 4, MC 5, and MC 6) the top-k sentences model outperforms baseline 1 as well the bottom-k and the random-k sentences models by a significant margin. Furthermore, for three of the six criteria (MC 4, MC 5, and MC 6) the top-k sentences model also outperforms the baseline 2 model (model which utilized all full text). This seems to confirm our hypothesis that semantic relatedness of sentences to the criteria descriptions helps in identifying sentences discussing the criteria. These seems to be the case especially given that for three of the six criteria the top-k sentences model outperforms the model which utilizes all full text (baseline 2) despite being given less information to learn from (selected sentences only in the case of the top-k sentences model vs. all full text in the case of the baseline 2 model).",
"For two of the criteria (MC 2 and MC 3) this is not the case and the top-k sentences model performs worse than both other models in the case of MC 3 and worse than the random-k model in the case of MC 2. One possible explanation for this is class imbalance. In the case of MC 2, only 33 out of 592 publications (5.57%) represent negative examples (Table TABREF17 ). As the top-k sentences model picks only sentences closely related to MC 2, it is possible that due to the class imbalance the top sentences don't contain enough negative examples to learn from. On the other hand, the bottom-k and random-k sentences models may select text not necessarily related to the criteria but potentially containing linguistic patterns which the model learns to associate with the criteria; for example, certain chemicals may require the use of a certain study protocol which may not be aligned with the MC and the model may key in on the appearance of these chemicals in text rather than the appearance of MC indicators. The situation is similar in the case of MC 3. We would like to emphasize that the goal of this experiment was not to achieve state-of-the-art results but to investigate empirically the viability of utilizing semantic relatedness of text segments to criteria descriptions for identifying relevant segments."
],
[
"In this section we present studies most similar to our work. We focus on unsupervised methods for information extraction from biomedical texts.",
"Many methods for biomedical data annotation and extraction exist which utilize labeled data and supervised learning approaches ( BIBREF12 and BIBREF6 provided a good overview of a number of these methods); however, unsupervised approaches in this area are much scarcer. One such approach has been introduced by BIBREF13 , who have proposed a model for unsupervised Named Entity Recognition. Similar to our approach, their model is based on calculating the similarity between vector representations of candidate phrases and existing entities. However, their vector representations are created using a combination of TF-IDF weights and word context information, and their method relies on a terminology. More recently, BIBREF14 have utilized Word2Vec and Doc2Vec embeddings for unsupervised sentiment classification in medical discharge summaries.",
"A number of previous studies have focused on unsupervised extraction of relations such as protein-protein interactions (PPI) from biomedical texts. For example, BIBREF15 have utilized several techniques, namely kernel-based pattern clustering and dependency parsing, to extract PPI from biomedical texts. BIBREF16 have introduced a system for unsupervised extraction of entities and relations between these entities from clinical texts written in Italian, which utilized a thesaurus for extraction of entities and clustering methods for relation extraction. BIBREF17 also used clinical texts and proposed a generative model for unsupervised relation extraction. Another approach focusing on relation extraction has been proposed by BIBREF18 . Their approach is based on constructing a graph which is used to construct domain-independent patterns for extracting protein-protein interactions.",
"A similar but distinct approach to unsupervised extraction is distant supervision. Similarly as unsupervised extraction methods, distant supervision methods don't require any labeled data, but make use of weakly labeled data, such as data extracted from a knowledge base. Distant supervision has been applied to relation extraction BIBREF19 , extraction of gene interactions BIBREF20 , PPI extraction BIBREF21 , BIBREF22 , and identification of PICO elements BIBREF23 . The advantage of our approach compared to the distantly supervised methods is that it does not require any underlying knowledge base or a similar source of data."
],
[
"In this paper we presented a method for unsupervised identification of text segments relevant to specific sought after information being extracted from scientific documents. Our method is entirely unsupervised and only requires the current document itself and the input descriptions instead of corpus linked to this document. The method utilizes short descriptions of the information being extracted from the documents and the ability of word embeddings to capture word context. Consequently, it is domain independent and can potentially be applied to another set of documents and criteria with minimal effort. We have used the method on a corpus of toxicology documents and a set of guideline protocol criteria needed to be extracted from the documents. We have shown the identified text segments are very accurate. Furthermore, a binary classifier trained to identify publications that met the criteria performed better when trained on the candidate sentences than when trained on sentences randomly picked from the text, supporting our intuition that our method is able to accurately identify relevant text segments from full text documents.",
"There are a number of things we plan on investigating next. In our initial experiment we have utilized criteria descriptions which were not designed to be used by our model. One possible improvement of our method could be replacing the current descriptions with example sentences taken from the documents containing the sought after information. We also plan on testing our method on an annotated dataset, for example using existing annotated PICO element datasets BIBREF24 ."
],
[
"This section provides additional details and results. Figures FIGREF19 , FIGREF20 , and FIGREF21 show example annotations generated for criteria MC 4, MC 5, and MC 6."
]
],
"section_name": [
"Acknowledgments",
"Introduction",
"The Task and the Data",
"Task Description",
"The Dataset",
"Approach",
"Example Results",
"Evaluation",
"Results analysis",
"Related Work",
"Conclusions and Future Work",
"Supplemental Material"
]
} | {
"answers": [
{
"annotation_id": [
"53f59ff4d1ae66e7e37b7c3c0e6fb990e6c2c963",
"a6abd170cca8053c1bcca8717e9a38ab67803805"
],
"answer": [
{
"evidence": [
"The remainder of this paper is organized as follows. In the following section we provide more details of the task and the dataset used in this study. In Section SECREF3 we describe our approach. In Section SECREF4 we evaluate our model and discuss our results. In Section SECREF5 we compare our work to existing approaches. Finally, in Section SECREF6 we provide ideas for further study."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In Section SECREF5 we compare our work to existing approaches."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The goal of this experiment was to explore empirically whether our approach truly identifies mentions of the minimum criteria in text. As we did not have any fine-grained annotations that could be used to directly evaluate whether our model identifies the correct sequences, we have used a different methodology. We have utilized the existing 0/1 labels which were available in the database (these were discussed in Section SECREF2 ) to train one binary classifier for each MC. The task of each of the classifiers is to determine whether a publication met the given criteria or not. We have then compared a baseline classifier trained on all full text with three other models:"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We have then compared a baseline classifier trained on all full text with three other models:"
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"18d560a38ba79c7b08e5262cf54f5e3a3306f8f5",
"3d9dec90a3ac88481fe414aa0dd2d3f6c4094d0c"
],
"answer": [
{
"evidence": [
"Significant efforts in toxicology research are being devoted towards developing new in vitro methods for testing chemicals due to the large number of untested chemicals in use ( INLINEFORM0 75,000-80,000 BIBREF8 , BIBREF1 ) and the cost and time required by existing in vivo methods (2-3 years and millions of dollars per chemical BIBREF8 ). To facilitate the development of novel in vitro methods and assess the adherence to existing study guidelines, a curated database of high-quality in vivo rodent uterotrophic bioassay data extracted from research publications has recently been developed and published BIBREF1 ."
],
"extractive_spans": [
"a curated database of high-quality in vivo rodent uterotrophic bioassay data"
],
"free_form_answer": "",
"highlighted_evidence": [
"To facilitate the development of novel in vitro methods and assess the adherence to existing study guidelines, a curated database of high-quality in vivo rodent uterotrophic bioassay data extracted from research publications has recently been developed and published BIBREF1 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The version of the database which contains both GL and non-GL studies consists of 670 publications (spanning the years 1938 through 2014) with results from 2,615 uterotrophic bioassays. Specifically, each entry in the database describes one study, and studies are linked to publications using PubMed reference numbers (PMIDs). Each study is assigned seven 0/1 labels – one for each of the minimum criteria and one for the overall GL/non-GL label. The database also contains more detailed subcategories for each label (for example “species” label for MC 1) which were not used in this study. The publication PDFs were provided to us by the database creators. We have used the Grobid library to convert the PDF files into structured text. After removing documents with missing PDF files and documents which were not converted successfully, we were left with 624 full text documents."
],
"extractive_spans": [
"GL and non-GL studies consists of 670 publications (spanning the years 1938 through 2014) with results from 2,615 uterotrophic bioassays"
],
"free_form_answer": "",
"highlighted_evidence": [
"The version of the database which contains both GL and non-GL studies consists of 670 publications (spanning the years 1938 through 2014) with results from 2,615 uterotrophic bioassays. Specifically, each entry in the database describes one study, and studies are linked to publications using PubMed reference numbers (PMIDs)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"e1fab461b51bebaa599cd5315bdf3ea124128eda",
"e9b61c0a21f80974322a1481f107712abaa37a1c"
],
"answer": [
{
"evidence": [
"To avoid selecting the same sentences across the three models we removed documents which contained less than INLINEFORM0 sentences (Table TABREF17 , row Number of documents shows how many documents satisfied this condition). In all of the experiments presented in this section, the publication full text was tokenized, lower-cased, stemmed, and stop words were removed. All models used a Bernoulli Naïve Bayes classifier (scikit-learn implementation which used a uniform class prior) trained on binary occurrence matrices created using 1-3-grams extracted from the publications, with n-grams appearing in only one document removed. The complete results obtained from leave-one-out cross validation are shown in Table TABREF17 . In all cases we report classification accuracy. In the case of the random-k sentences model the accuracy was averaged over 10 runs of the model."
],
"extractive_spans": [
"Bernoulli Naïve Bayes classifier"
],
"free_form_answer": "",
"highlighted_evidence": [
"All models used a Bernoulli Naïve Bayes classifier (scikit-learn implementation which used a uniform class prior) trained on binary occurrence matrices created using 1-3-grams extracted from the publications, with n-grams appearing in only one document removed. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To avoid selecting the same sentences across the three models we removed documents which contained less than INLINEFORM0 sentences (Table TABREF17 , row Number of documents shows how many documents satisfied this condition). In all of the experiments presented in this section, the publication full text was tokenized, lower-cased, stemmed, and stop words were removed. All models used a Bernoulli Naïve Bayes classifier (scikit-learn implementation which used a uniform class prior) trained on binary occurrence matrices created using 1-3-grams extracted from the publications, with n-grams appearing in only one document removed. The complete results obtained from leave-one-out cross validation are shown in Table TABREF17 . In all cases we report classification accuracy. In the case of the random-k sentences model the accuracy was averaged over 10 runs of the model."
],
"extractive_spans": [
"Bernoulli Naïve Bayes classifier"
],
"free_form_answer": "",
"highlighted_evidence": [
"All models used a Bernoulli Naïve Bayes classifier (scikit-learn implementation which used a uniform class prior) trained on binary occurrence matrices created using 1-3-grams extracted from the publications, with n-grams appearing in only one document removed."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0406595b73d9f5f4cc6f6b1ce84489ab6781643c",
"78b2688883324dba910a289ce5eaf872f18c62d8"
],
"answer": [
{
"evidence": [
"The version of the database which contains both GL and non-GL studies consists of 670 publications (spanning the years 1938 through 2014) with results from 2,615 uterotrophic bioassays. Specifically, each entry in the database describes one study, and studies are linked to publications using PubMed reference numbers (PMIDs). Each study is assigned seven 0/1 labels – one for each of the minimum criteria and one for the overall GL/non-GL label. The database also contains more detailed subcategories for each label (for example “species” label for MC 1) which were not used in this study. The publication PDFs were provided to us by the database creators. We have used the Grobid library to convert the PDF files into structured text. After removing documents with missing PDF files and documents which were not converted successfully, we were left with 624 full text documents."
],
"extractive_spans": [
"670"
],
"free_form_answer": "",
"highlighted_evidence": [
"The version of the database which contains both GL and non-GL studies consists of 670 publications (spanning the years 1938 through 2014) with results from 2,615 uterotrophic bioassays. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The version of the database which contains both GL and non-GL studies consists of 670 publications (spanning the years 1938 through 2014) with results from 2,615 uterotrophic bioassays. Specifically, each entry in the database describes one study, and studies are linked to publications using PubMed reference numbers (PMIDs). Each study is assigned seven 0/1 labels – one for each of the minimum criteria and one for the overall GL/non-GL label. The database also contains more detailed subcategories for each label (for example “species” label for MC 1) which were not used in this study. The publication PDFs were provided to us by the database creators. We have used the Grobid library to convert the PDF files into structured text. After removing documents with missing PDF files and documents which were not converted successfully, we were left with 624 full text documents."
],
"extractive_spans": [
"670 publications"
],
"free_form_answer": "",
"highlighted_evidence": [
"The version of the database which contains both GL and non-GL studies consists of 670 publications (spanning the years 1938 through 2014) with results from 2,615 uterotrophic bioassays."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0cbb0c57958ee9d8f1e61ea0c737170db1a58971",
"6e353f02c81ba0ac2edf44dc46c6bfd8518ac293"
],
"answer": [
{
"evidence": [
"Extracting data elements such as study descriptors from publication full texts is an essential step in a number of tasks including systematic review preparation BIBREF0 , construction of reference databases BIBREF1 , and knowledge discovery BIBREF2 . These tasks typically involve domain experts identifying relevant literature pertaining to a specific research question or a topic being investigated, identifying passages in the retrieved articles that discuss the sought after information, and extracting structured data from these passages. The extracted data is then analyzed, for example to assess adherence to existing guidelines BIBREF1 . Figure FIGREF2 shows an example text excerpt with information relevant to a specific task (assessment of adherence to existing guidelines BIBREF1 ) highlighted."
],
"extractive_spans": [],
"free_form_answer": "Study descriptor is a set of structured data elements extracted from a publication text that contains specific expert knowledge pertaining to domain topics.",
"highlighted_evidence": [
"QUESTION (5 / 5): WHAT IS A STUDY DESCRIPTOR?",
"Extracting data elements such as study descriptors from publication full texts is an essential step in a number of tasks including systematic review preparation BIBREF0 , construction of reference databases BIBREF1 , and knowledge discovery BIBREF2 . These tasks typically involve domain experts identifying relevant literature pertaining to a specific research question or a topic being investigated, identifying passages in the retrieved articles that discuss the sought after information, and extracting structured data from these passages. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do they compare to previous work?",
"What is the source of their data?",
"What is their binary classifier?",
"How long is their dataset?",
"What is a study descriptor?"
],
"question_id": [
"5499440674f0e4a9d6912b9ac29fa1f7b7cd5253",
"de313b5061fc22e8ffef1706445728de298eae31",
"47b7bc232af7bf93338bd3926345e23e9e80c0c1",
"0b5c599195973c563c4b1a0fe5d8fc77204d71a0",
"1397b1c51f722a4ee2b6c64dc9fc6afc8bd3e880"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Text excerpt from a reference database of rodent uterotrophic bioassay publications (Kleinstreuer et al., 2016). The text in this example was manually annotated by one of the authors to highlight information relevant to guidelines for performing uterotrophic bioassays set forth by (OECD, 2007).",
"Table 1: Minimum criteria for guideline-like studies. The descriptions are reprinted here from (Kleinstreuer et al., 2016).",
"Table 2: Label statistics. Column 0 shows number of publications per MC which did not meet the criteria and column 1 shows number of publications which met the criteria. The last column in the table shows proportion of positive (i.e. criteria met) labels.",
"Figure 2: High level overview of our approach. The dotted line represents an optional step of finding smaller sub-segments within the candidate segments. For example, in our case, we first retrieve the most similar sentences and in the second step find most similar continuous 5-grams found withing those sentences.",
"Table 3: Evaluation results.",
"Figure 5: Annotations generated using our method for “MC 3: Route of administration”. The highlighting used is the same as in Figure 3."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Figure2-1.png",
"8-Table3-1.png",
"8-Figure5-1.png"
]
} | [
"What is a study descriptor?"
] | [
[
"1811.01183-Introduction-0"
]
] | [
"Study descriptor is a set of structured data elements extracted from a publication text that contains specific expert knowledge pertaining to domain topics."
] | 234 |
1910.09387 | Clotho: An Audio Captioning Dataset | Audio captioning is the novel task of general audio content description using free text. It is an intermodal translation task (not speech-to-text), where a system accepts as an input an audio signal and outputs the textual description (i.e. the caption) of that signal. In this paper we present Clotho, a dataset for audio captioning consisting of 4981 audio samples of 15 to 30 seconds duration and 24 905 captions of eight to 20 words length, and a baseline method to provide initial results. Clotho is built with focus on audio content and caption diversity, and the splits of the data are not hampering the training or evaluation of methods. All sounds are from the Freesound platform, and captions are crowdsourced using Amazon Mechanical Turk and annotators from English speaking countries. Unique words, named entities, and speech transcription are removed with post-processing. Clotho is freely available online (this https URL). | {
"paragraphs": [
[
"Captioning is the intermodal translation task of describing the human-perceived information in a medium, e.g. images (image captioning) or audio (audio captioning), using free text BIBREF0, BIBREF1, BIBREF2, BIBREF3. In particular, audio captioning was first introduced in BIBREF3, it does not involve speech transcription, and is focusing on identifying the human-perceived information in an general audio signal and expressing it through text, using natural language. This information includes identification of sound events, acoustic scenes, spatiotemporal relationships of sources, foreground versus background discrimination, concepts, and physical properties of objects and environment. For example, given an audio signal, an audio captioning system would be able to generate captions like “a door creaks as it slowly revolves back and forth”.",
"The dataset used for training an audio captioning method defines to a great extent what the method can learn BIBREF0, BIBREF4. Diversity in captions allows the method to learn and exploit the perceptual differences on the content (e.g. a thin plastic rattling could be perceived as a fire crackling) BIBREF0. Also, the evaluation of the method becomes more objective and general by having more captions per audio signal BIBREF4.",
"Recently, two different datasets for audio captioning were presented, Audio Caption and AudioCaps BIBREF5, BIBREF6. Audio Caption is partially released, and contains 3710 domain-specific (hospital) video clips with their audio tracks, and annotations that were originally obtained in Mandarin Chinese and afterwards translated to English using machine translation BIBREF5. The annotators had access and viewed the videos. The annotations contain description of the speech content (e.g. “The patient inquired about the location of the doctor’s police station”). AudioCaps dataset has 46 000 audio samples from AudioSet BIBREF7, annotated with one caption each using the crowdsourcing platform Amazon Mechanical Turk (AMT) and automated quality and location control of the annotators BIBREF6. Authors of AudioCaps did not use categories of sounds which they claimed that visuals were required for correct recognition, e.g. “inside small room”. Annotators of AudioCaps were provided the word labels (by AudioSet) and viewed the accompanying videos of the audio samples.",
"The perceptual ambiguity of sounds can be hampered by providing contextual information (e.g. word labels) to annotators, making them aware of the actual source and not letting them describe their own perceived information. Using visual stimuli (e.g. video) introduces a bias, since annotators may describe what they see and not what they hear. Also, a single caption per file impedes the learning and evaluation of diverse descriptions of information, and domain-specific data of previous audio captioning datasets have an observed significant impact on the performance of methods BIBREF5. Finally, unique words (i.e. words appearing only once) affect the learning process, as they have an impact on the evaluation process (e.g. if a word is unique, will be either on training or on evaluation). An audio captioning dataset should at least provide some information on unique words contained in its captions.",
"In this paper we present the freely available audio captioning dataset Clotho, with 4981 audio samples and 24 905 captions. All audio samples are from Freesound platform BIBREF8, and are of duration from 15 to 30 seconds. Each audio sample has five captions of eight to 20 words length, collected by AMT and a specific protocol for crowdsourcing audio annotations, which ensures diversity and reduced grammatical errors BIBREF0. During annotation no other information but the audio signal was available to the annotators, e.g. video or word tags. The rest of the paper is organized as follows. Section SECREF2 presents the creation of Clotho, i.e. gathering and processing of the audio samples and captions, and the splitting of the data to development, evaluation, and testing splits. Section SECREF3 presents the baseline method used, the process followed for its evaluation using Clotho, and the obtained results. Section SECREF4 concludes the paper."
],
[
"We collect the set of audio samples $\\mathbb {X}_{\\text{init}}=\\lbrace \\mathbf {x}_{\\text{init}}^{i}\\rbrace _{i=1}^{N_{\\text{init}}}$, with $N_{\\text{init}}=12000$ and their corresponding metadata (e.g. tags that indicate their content, and a short textual description), from the online platform Freesound BIBREF8. $\\mathbf {x}_{\\text{init}}$ was obtained by randomly sampling audio files from Freesound fulfilling the following criteria: lossless file type, audio quality at least 44.1 kHz and 16-bit, duration $10\\text{ s}\\le d({\\mathbf {x}_{\\text{init}}^{i}})\\le 300$ s (where $d(\\mathbf {x})$ is the duration of $\\mathbf {x}$), a textual description which first sentence does not have spelling errors according to US and UK English dictionaries (as an indication of the correctness of the metadata, e.g. tags), and not having tags that indicate music, sound effects, or speech. As tags indicating speech files we consider those like “speech”, “speak”, and “woman”. We normalize $\\mathbf {x}^{i}_{\\text{init}}$ to the range $[-1, 1]$, trim the silence (60 dB below the maximum amplitude) from the beginning and end, and resample to 44.1 kHz. Finally, we keep samples that are longer than 15 s as a result of the processing. This results in $\\mathbb {X}^{\\prime }_{\\text{init}}=\\lbrace \\mathbf {x}_{\\text{init}}^{j}\\rbrace _{j=1}^{N^{\\prime }_{\\text{init}}},\\,N^{\\prime }_{\\text{init}}=9000$.",
"For enhancing the diversity of the audio content, we aim to create $\\mathbb {X}_{\\text{med}}\\subset \\mathbb {X}^{\\prime }_{\\text{init}}$ based on the tags of $\\mathbb {X}^{\\prime }_{\\text{init}}$, targeting to the most uniform possible distribution of the tags of the audio samples in $\\mathbb {X}_{\\text{med}}$. We first create the bag of tags $\\mathbb {T}$ by collecting all the tags of sounds in $\\mathbb {X}^{\\prime }_{\\text{init}}$. We omit tags that describe time or recording equipment and process (e.g. “autumn”, “field-recording”). Then, we calculate the normalized frequency of all tags in $\\mathbb {T}$ and create $\\mathbb {T}_{\\text{0.01}}\\subset \\mathbb {T}$, with tags of a normalized frequency of at least 0.01. We randomly sample $10^6$ sets (with overlap) of $N_{\\text{med}}=5000$ files from $\\mathbb {X}^{\\prime }_{\\text{init}}$, and keep the set that has the maximum entropy for $\\mathbb {T}_{\\text{0.01}}$. This process results in $\\mathbb {X}_{\\text{med}}=\\lbrace \\mathbf {x}_{\\text{init}}^{z}\\rbrace _{z=1}^{N_{\\text{med}}}$, having the most uniform tag distribution and, hence, the most diverse content. The resulting distribution of the tags in $\\mathbb {T}_{\\text{0.01}}$ is illustrated in Figure FIGREF5. The 10 most common tags are: ambient, water, nature, birds, noise, rain, city, wind, metal, and people.",
"We target at audio samples $\\mathbf {x}$ having a uniform distribution between 15 and 30 s. Thus, we further process $\\mathbb {X}_{\\text{med}}$, keeping the files with a maximum duration of 30 s and cutting a segment from the rest. We randomly select a set of values for the duration of the segments that will maximize the entropy of the duration of the files, discretizing the durations with a resolution of 0.05 s. In order to not pick segment without activity, we sample the files by taking a window with a selected duration that maximizes the energy of the sample. Finally, we apply a 512-point Hamming window to the beginning and the end of the samples, smoothing the effect of sampling. The above process results to $\\mathbb {X}_{\\text{sam}}=\\lbrace \\mathbf {x}_{\\text{sam}}^{z}\\rbrace _{z=1}^{N_{\\text{med}}}$, where the distribution of durations is approximately uniform between 15 and 30 s."
],
[
"We use AMT and a novel three-step based framework BIBREF0 for crowdsourcing the annotation of $\\mathbb {X}_{\\text{sam}}$, acquiring the set of captions $\\mathbb {C}_{\\text{sam}}^{z}=\\lbrace c_{\\text{sam}}^{z,u}\\rbrace _{u=1}^{N_{\\text{cp}}}$ for each $\\mathbf {x}_{\\text{sam}}^{z}$, where $c_{\\text{sam}}^{z,u}$ is an eight to 20 words long caption for $\\mathbf {x}_{\\text{sam}}^{z}$. In a nutshell, each audio sample $\\mathbf {x}_{\\text{sam}}^{z}$ gets annotated by $N_{\\text{cp}}$ different annotators in the first step of the framework. The annotators have access only to $\\mathbf {x}_{\\text{sam}}^{z}$ and not any other information. In the second step, different annotators are instructed to correct any grammatical errors, typos, and/or rephrase the captions. This process results in $2\\times N_{\\text{cp}}$ captions per $\\mathbf {x}_{\\text{sam}}^{z}$. Finally, three (again different) annotators have access to $\\mathbf {x}_{\\text{sam}}^{z}$ and its $2\\times N_{\\text{cp}}$ captions, and score each caption in terms of the accuracy of the description and fluency of English, using a scale from 1 to 4 (the higher the better). The captions for each $\\mathbf {x}_{\\text{sam}}^{z}$ are sorted (first according to accuracy of description and then according to fluency), and two groups are formed: the top $N_{\\text{cp}}$ and the bottom $N_{\\text{cp}}$ captions. The top $N_{\\text{cp}}$ captions are selected as $\\mathbb {C}_{\\text{sam}}^{z}$. We manually sanitize further $\\mathbb {C}_{\\text{sam}}^{z}$, e.g. by replacing “it's” with “it is” or “its”, making consistent hyphenation and compound words (e.g. “nonstop”, “non-stop”, and “non stop”), removing words or rephrasing captions pertaining to the content of speech (e.g. “French”, “foreign”), and removing/replacing named entities (e.g. “Windex”).",
"Finally, we observe that some captions include transcription of speech. To remove it, we employ extra annotators (not from AMT) which had access only at the captions. We instruct the annotators to remove the transcribed speech and rephrase the caption. If the result is less than eight words, we check the bottom $N_{\\text{cp}}$ captions for that audio sample. If they include a caption that has been rated with at least 3 by all the annotators for both accuracy and fluency, and does not contain transcribed speech, we use that caption. Otherwise, we remove completely the audio sample. This process yields the final set of audio samples and captions, $\\mathbb {X}=\\lbrace \\mathbf {x}^{o}\\rbrace _{o=1}^{N}$ and $\\mathbb {C}^{\\prime }=\\lbrace \\mathbb {C}^{\\prime o}\\rbrace _{o=1}^{N}$, respectively, with $\\mathbb {C}^{\\prime o}=\\lbrace c^{\\prime o,u}\\rbrace _{u=1}^{N_{\\text{cp}}}$ and $N=4981$.",
"An audio sample should belong to only one split of data (e.g., training, development, testing). This means that if a word appears only at the captions of one $\\mathbf {x}^{o}$, then this word will be appearing only at one of the splits. Having a word appearing only in training split leads to sub-optimal learning procedure, because resources are spend to words unused in validation and testing. If a word is not appearing in the training split, then the evaluation procedure suffers by having to evaluate on words not known during training. For that reason, for each $\\mathbf {x}^{o}$ we construct the set of words $\\mathbb {S}_{a}^{o}$ from $\\mathbb {C}^{\\prime o}$. Then, we merge all $\\mathbb {S}_{a}^{o}$ to the bag $\\mathbb {S}_{T}$ and we identify all words that appear only once (i.e. having a frequency of one) in $\\mathbb {S}_{T}$. We employ an extra annotator (not from AMT) which has access only to the captions of $\\mathbf {x}^{o}$, and has the instructions to change the all words in $\\mathbb {S}_{T}$ with frequency of one, with other synonym words in $\\mathbb {S}_{T}$ and (if necessary) rephrase the caption. The result is the set of captions $\\mathbb {C}=\\lbrace \\mathbb {C}^{o}\\rbrace _{o=1}^{N}$, with words in $\\mathbb {S}_{T}$ having a frequency of at least two. Each word will appear in the development set and at least in one of the evaluation or testing splits. This process yields the data of the Clotho dataset, $\\mathbb {D}=\\lbrace \\left<\\mathbf {x}^{o}, \\mathbb {C}^{o}\\right>\\rbrace _{o=1}^{N}$."
],
[
"We split $\\mathbb {D}$ in three non-overlapping splits of 60%-20%-20%, termed as development, evaluation, and testing, respectively. Every word in the captions of $\\mathbb {D}$ appears at the development split and at least in one of the other two splits.",
"For each $\\mathbf {x}^{o}$ we construct the set of unique words $\\mathbb {S}^{o}$ from its captions $\\mathbb {C}^{o}$, using all letters in small-case and excluding punctuation. We merge all $\\mathbb {S}^{o}$ to the bag $\\mathbb {S}_{\\text{bag}}$ and calculate the frequency $f_{w}$ of each word $w$. We use multi-label stratification BIBREF9, having as labels for each $\\mathbf {x}^{o}$ the corresponding words $\\mathbb {S}^{o}$, and split $\\mathbb {D}$ 2000 times in sets of splits of 60%-40%, where 60% corresponds to the development split. We reject the sets of splits that have at least one word appearing only in one of the splits. Ideally, the words with $f_{w}=2$ should appear only once in the development split. The other appearance of word should be in the evaluation or testing splits. This will prevent having unused words in the training (i.e. words appearing only in the development split) or unknown words in the evaluation/testing process (i.e. words not appearing in the development split). The words with $f_{w}\\ge 3$ should appear $f^{\\text{Dev}}_{w}=\\lfloor 0.6f_{w}\\rfloor $ times in the development split, where 0.6 is the percentage of data in the development split and $\\lfloor \\ldots \\rfloor $ is the floor function. We calculate the frequency of words in the development split, $f^{\\text{d}}_{w}$, and observe that it is impossible to satisfy the $f^{\\text{d}}_{w}=f^{\\text{Dev}}_{w}$ for the words with $f_{w}\\ge 3$. Therefore, we adopted a tolerance $\\delta _{w}$ (i.e. a deviation) to the $f^{\\text{Dev}}_{w}$, used as $f^{\\text{Dev}}_{w}\\pm \\delta _{w}$:",
"The tolerance means, for example, that we can tolerate a word appearing a total of 3 times in the whole Clotho dataset $\\mathbb {D}$, to appear 2 times in the development split (appearing 0 times in development split results in the rejection of the split set). This will result to this word appearing in either evaluation or testing split, but still this word will not appear only in one split. To pick the best set of splits, we count the amount of words that have a frequency $f^{\\text{d}}_{w}\\notin [f^{\\text{Dev}}_{w}-\\delta _{w},f^{\\text{Dev}}_{w}+\\delta _{w}]$. We score, in an ascending fashion, the sets of splits according to that amount of words and we pick the top 50 ones. For each of the 50 sets of splits, we further separate the 40% split to 20% and 20%, 1000 times. That is, we end up with 50 000 sets of splits of 60%, 20%, 20%, corresponding to development, evaluation, and testing splits, respectively. We want to score each of these sets of splits, in order to select the split with the smallest amount of words that deviate from the ideal split for each of these 50 000 sets of splits. We calculate the frequency of appearance of each word in the development, evaluation, and testing splits, $f^{\\text{d}}_{w}$, $f^{\\text{e}}_{w}$, and $f^{\\text{t}}_{w}$, respectively. Then, we create the sets of words $\\Psi _{d}$, $\\Psi _{e}$, and $\\Psi _{t}$, having the words with $f^{\\text{d}}_{w} \\notin [f^{\\text{Dev}}_{w}- \\delta _{w},f^{\\text{Dev}}_{w}+\\delta _{w}]$, $f^{\\text{e}}_{w} \\notin [f^{\\text{Ev}}_{w}- \\delta _{w},f^{\\text{Ev}}_{w}+\\delta _{w}]$, and $f^{\\text{t}}_{w} \\notin [f^{\\text{Ev}}_{w}- \\delta _{w},f^{\\text{Ev}}_{w}+\\delta _{w}]$, respectively, where $f^{\\text{Ev}}_{w} = f_{w} - f^{\\text{Dev}}_{w}$. Finally, we calculate the sum of the weighted distance of frequencies of words from the $f^{\\text{Dev}}_{w}\\pm \\delta _{w}$ or $f^{\\text{Ev}}_{w}\\pm \\delta _{w}$ range (for words being in the development split or not, respectively), $\\Gamma $, as",
"where $\\alpha _{d}=1/f^{\\text{Dev}}_{w}$ and $\\alpha _{e}=1/0.5f^{\\text{Ev}}_{w}$. We sort all 50 000 sets of splits according to $\\Gamma $ and in ascending fashion, and we pick the top one. This set of splits is the final split for the Clotho dataset, containing 2893 audio samples and 14465 captions in development split, 1045 audio samples and 5225 captions in evaluation split, and 1043 audio samples and 5215 captions in the testing split. The development and evaluation splits are freely available online2. The testing split is withheld for potential usage in scientific challenges. A fully detailed description of the Clotho dataset can be found online. In Figure FIGREF12 is a histogram of the percentage of words ($f^{d}_{w}/f_{w}$, $f^{e}_{w}/f_{w}$, and $f^{t}_{w}/f_{w}$) in the three different splits."
],
[
"In order to provide an example of how to employ Clotho and some initial (baseline) results, we use a previously utilized method for audio captioning BIBREF3 which is based on an encoder-decoder scheme with attention. The method accepts as an input a length-$T$ sequence of 64 log mel-band energies $\\mathbf {X}\\in \\mathbb {R}^{T\\times 64}$, which is used as an input to a DNN which outputs a probability distribution of words. The generated caption is constructed from the output of the DNN, as in BIBREF3. We optimize the parameters of the method using the development split of Clotho and we evaluate it using the evaluation and the testing splits, separately.",
"We first extract 64 log mel-band energies, using a Hamming window of 46 ms, with 50% overlap. We tokenize the captions of the development split, using a one-hot encoding of the words. Since all the words in in the development split appear in the other two splits as well, there are no unknown tokens/words. We also employ the start- and end-of-sequence tokens ($\\left<\\text{SOS}\\right>$ and $\\left<\\text{EOS}\\right>$ respectively), in order to signify the start and end of a caption.",
"The encoder is a series of bi-directional gated recurrent units (bi-GRUs) BIBREF10, similarly to BIBREF3. The output dimensionality for the GRU layers (forward and backward GRUs have same dimensionality) is $\\lbrace 256, 256, 256\\rbrace $. The output of the encoder is processed by an attention mechanism and its output is given as an input to the decoder. The attention mechanism is a feed-forward neural network (FNN) and the decoder a GRU. Then, the output of the decoder is given as an input to another FNN with a softmax non-linearity, which acts as a classifier and outputs the probability distribution of words for the $i$-th time-step. To optimize the parameters of the employed method, we use five times each audio sample, using its five different captions as targeted outputs each time. We optimize jointly the parameters of the encoder, attention mechanism, decoder, and the classifier, using 150 epochs, the cross entropy loss, and Adam optimizer BIBREF11 with proposed hyper-parameters. Also, in each batch we pad the captions of the batch to the longest in the same batch, using the end-of-sequence token, and the input audio features to the longest ones, by prepending zeros.",
"We assess the performance of the method on evaluation and testing splits, using the machine translation metrics BLEU$n$ (with $n=1,\\ldots ,4$), METEOR, CIDEr, and ROUGEL for comparing the output of the method and the reference captions for the input audio sample. In a nutshell, BLEU$n$ measures a modified precision of $n$-grams (e.g. BLEU2 for 2-grams), METEOR measures a harmonic mean-based score of the precision and recall for unigrams, CIDEr measures a weighted cosine similarity of $n$-grams, and ROUGEL is a longest common subsequence-based score.",
"In Table TABREF13 are the scores of the employed metrics for the evaluation and testing splits.",
"As can be seen from Table TABREF13 and BLEU1, the method has started identifying the content of the audio samples by outputting words that exist in the reference captions. For example, the method outputs “water is running into a container into a”, while the closest reference caption is “water pouring into a container with water in it already”, or “birds are of chirping the chirping and various chirping” while the closest reference is “several different kinds of birds are chirping and singing”. The scores of the rest metrics reveal that the structure of the sentence and order of the words are not correct. These are issues that can be tackled by adopting either a pre-calculated or jointly learnt language model. In any case, the results show that the Clotho dataset can effectively be used for research on audio captioning, posing useful data in tackling the challenging task of audio content description."
],
[
"In this work we present a novel dataset for audio captioning, named Clotho, that contains 4981 audio samples and five captions for each file (totaling to 24 905 captions). During the creating of Clotho care has been taken in order to promote diversity of captions, eliminate words that appear only once and named entities, and provide data splits that do not hamper the training or evaluation process. Also, there is an example of the usage of Clotho, using a method proposed at the original work of audio captioning. The baseline results indicate that the baseline method started learning the content of the input audio, but more tuning is needed in order to express the content properly. Future work includes the employment of Clotho and development of novel methods for audio captioning."
],
[
"The research leading to these results has received funding from the European Research Council under the European Union’s H2020 Framework Programme through ERC Grant Agreement 637422 EVERYSOUND. Part of the computations leading to these results were performed on a TITAN-X GPU donated by NVIDIA to K. Drossos. The authors also wish to acknowledge CSC-IT Center for Science, Finland, for computational resources."
]
],
"section_name": [
"Introduction",
"Creation of Clotho dataset ::: Audio data collection and processing",
"Creation of Clotho dataset ::: Captions collection and processing",
"Creation of Clotho dataset ::: Data splitting",
"Baseline method and evaluation",
"Conclusions",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"106c8e712e44af851fed041e761b0f501928330c",
"c8d8f7abc9b0b29bbd2ed347aa7196427dbdc6c0"
],
"answer": [
{
"evidence": [
"We collect the set of audio samples $\\mathbb {X}_{\\text{init}}=\\lbrace \\mathbf {x}_{\\text{init}}^{i}\\rbrace _{i=1}^{N_{\\text{init}}}$, with $N_{\\text{init}}=12000$ and their corresponding metadata (e.g. tags that indicate their content, and a short textual description), from the online platform Freesound BIBREF8. $\\mathbf {x}_{\\text{init}}$ was obtained by randomly sampling audio files from Freesound fulfilling the following criteria: lossless file type, audio quality at least 44.1 kHz and 16-bit, duration $10\\text{ s}\\le d({\\mathbf {x}_{\\text{init}}^{i}})\\le 300$ s (where $d(\\mathbf {x})$ is the duration of $\\mathbf {x}$), a textual description which first sentence does not have spelling errors according to US and UK English dictionaries (as an indication of the correctness of the metadata, e.g. tags), and not having tags that indicate music, sound effects, or speech. As tags indicating speech files we consider those like “speech”, “speak”, and “woman”. We normalize $\\mathbf {x}^{i}_{\\text{init}}$ to the range $[-1, 1]$, trim the silence (60 dB below the maximum amplitude) from the beginning and end, and resample to 44.1 kHz. Finally, we keep samples that are longer than 15 s as a result of the processing. This results in $\\mathbb {X}^{\\prime }_{\\text{init}}=\\lbrace \\mathbf {x}_{\\text{init}}^{j}\\rbrace _{j=1}^{N^{\\prime }_{\\text{init}}},\\,N^{\\prime }_{\\text{init}}=9000$."
],
"extractive_spans": [
"“speech”, “speak”, and “woman”"
],
"free_form_answer": "",
"highlighted_evidence": [
"We collect the set of audio samples $\\mathbb {X}_{\\text{init}}=\\lbrace \\mathbf {x}_{\\text{init}}^{i}\\rbrace _{i=1}^{N_{\\text{init}}}$, with $N_{\\text{init}}=12000$ and their corresponding metadata (e.g. tags that indicate their content, and a short textual description), from the online platform Freesound BIBREF8. $\\mathbf {x}_{\\text{init}}$ was obtained by randomly sampling audio files from Freesound fulfilling the following criteria: lossless file type, audio quality at least 44.1 kHz and 16-bit, duration $10\\text{ s}\\le d({\\mathbf {x}_{\\text{init}}^{i}})\\le 300$ s (where $d(\\mathbf {x})$ is the duration of $\\mathbf {x}$), a textual description which first sentence does not have spelling errors according to US and UK English dictionaries (as an indication of the correctness of the metadata, e.g. tags), and not having tags that indicate music, sound effects, or speech. As tags indicating speech files we consider those like “speech”, “speak”, and “woman”."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We collect the set of audio samples $\\mathbb {X}_{\\text{init}}=\\lbrace \\mathbf {x}_{\\text{init}}^{i}\\rbrace _{i=1}^{N_{\\text{init}}}$, with $N_{\\text{init}}=12000$ and their corresponding metadata (e.g. tags that indicate their content, and a short textual description), from the online platform Freesound BIBREF8. $\\mathbf {x}_{\\text{init}}$ was obtained by randomly sampling audio files from Freesound fulfilling the following criteria: lossless file type, audio quality at least 44.1 kHz and 16-bit, duration $10\\text{ s}\\le d({\\mathbf {x}_{\\text{init}}^{i}})\\le 300$ s (where $d(\\mathbf {x})$ is the duration of $\\mathbf {x}$), a textual description which first sentence does not have spelling errors according to US and UK English dictionaries (as an indication of the correctness of the metadata, e.g. tags), and not having tags that indicate music, sound effects, or speech. As tags indicating speech files we consider those like “speech”, “speak”, and “woman”. We normalize $\\mathbf {x}^{i}_{\\text{init}}$ to the range $[-1, 1]$, trim the silence (60 dB below the maximum amplitude) from the beginning and end, and resample to 44.1 kHz. Finally, we keep samples that are longer than 15 s as a result of the processing. This results in $\\mathbb {X}^{\\prime }_{\\text{init}}=\\lbrace \\mathbf {x}_{\\text{init}}^{j}\\rbrace _{j=1}^{N^{\\prime }_{\\text{init}}},\\,N^{\\prime }_{\\text{init}}=9000$."
],
"extractive_spans": [
"from the online platform Freesound BIBREF8"
],
"free_form_answer": "",
"highlighted_evidence": [
"We collect the set of audio samples $\\mathbb {X}_{\\text{init}}=\\lbrace \\mathbf {x}_{\\text{init}}^{i}\\rbrace _{i=1}^{N_{\\text{init}}}$, with $N_{\\text{init}}=12000$ and their corresponding metadata (e.g. tags that indicate their content, and a short textual description), from the online platform Freesound BIBREF8."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"b0e27c2c4341753c62489ab2ccdbe62f01a2f2e3",
"b5f45aca356576b492e2fc79412f6e7cbf907882"
],
"answer": [
{
"evidence": [
"We use AMT and a novel three-step based framework BIBREF0 for crowdsourcing the annotation of $\\mathbb {X}_{\\text{sam}}$, acquiring the set of captions $\\mathbb {C}_{\\text{sam}}^{z}=\\lbrace c_{\\text{sam}}^{z,u}\\rbrace _{u=1}^{N_{\\text{cp}}}$ for each $\\mathbf {x}_{\\text{sam}}^{z}$, where $c_{\\text{sam}}^{z,u}$ is an eight to 20 words long caption for $\\mathbf {x}_{\\text{sam}}^{z}$. In a nutshell, each audio sample $\\mathbf {x}_{\\text{sam}}^{z}$ gets annotated by $N_{\\text{cp}}$ different annotators in the first step of the framework. The annotators have access only to $\\mathbf {x}_{\\text{sam}}^{z}$ and not any other information. In the second step, different annotators are instructed to correct any grammatical errors, typos, and/or rephrase the captions. This process results in $2\\times N_{\\text{cp}}$ captions per $\\mathbf {x}_{\\text{sam}}^{z}$. Finally, three (again different) annotators have access to $\\mathbf {x}_{\\text{sam}}^{z}$ and its $2\\times N_{\\text{cp}}$ captions, and score each caption in terms of the accuracy of the description and fluency of English, using a scale from 1 to 4 (the higher the better). The captions for each $\\mathbf {x}_{\\text{sam}}^{z}$ are sorted (first according to accuracy of description and then according to fluency), and two groups are formed: the top $N_{\\text{cp}}$ and the bottom $N_{\\text{cp}}$ captions. The top $N_{\\text{cp}}$ captions are selected as $\\mathbb {C}_{\\text{sam}}^{z}$. We manually sanitize further $\\mathbb {C}_{\\text{sam}}^{z}$, e.g. by replacing “it's” with “it is” or “its”, making consistent hyphenation and compound words (e.g. “nonstop”, “non-stop”, and “non stop”), removing words or rephrasing captions pertaining to the content of speech (e.g. “French”, “foreign”), and removing/replacing named entities (e.g. “Windex”).",
"Finally, we observe that some captions include transcription of speech. To remove it, we employ extra annotators (not from AMT) which had access only at the captions. We instruct the annotators to remove the transcribed speech and rephrase the caption. If the result is less than eight words, we check the bottom $N_{\\text{cp}}$ captions for that audio sample. If they include a caption that has been rated with at least 3 by all the annotators for both accuracy and fluency, and does not contain transcribed speech, we use that caption. Otherwise, we remove completely the audio sample. This process yields the final set of audio samples and captions, $\\mathbb {X}=\\lbrace \\mathbf {x}^{o}\\rbrace _{o=1}^{N}$ and $\\mathbb {C}^{\\prime }=\\lbrace \\mathbb {C}^{\\prime o}\\rbrace _{o=1}^{N}$, respectively, with $\\mathbb {C}^{\\prime o}=\\lbrace c^{\\prime o,u}\\rbrace _{u=1}^{N_{\\text{cp}}}$ and $N=4981$."
],
"extractive_spans": [],
"free_form_answer": "They manually check the captions and employ extra annotators to further revise the annotations.",
"highlighted_evidence": [
"We manually sanitize further $\\mathbb {C}_{\\text{sam}}^{z}$, e.g. by replacing “it's” with “it is” or “its”, making consistent hyphenation and compound words (e.g. “nonstop”, “non-stop”, and “non stop”), removing words or rephrasing captions pertaining to the content of speech (e.g. “French”, “foreign”), and removing/replacing named entities (e.g. “Windex”).\n\nFinally, we observe that some captions include transcription of speech. To remove it, we employ extra annotators (not from AMT) which had access only at the captions. We instruct the annotators to remove the transcribed speech and rephrase the caption. If the result is less than eight words, we check the bottom $N_{\\text{cp}}$ captions for that audio sample. If they include a caption that has been rated with at least 3 by all the annotators for both accuracy and fluency, and does not contain transcribed speech, we use that caption. Otherwise, we remove completely the audio sample."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use AMT and a novel three-step based framework BIBREF0 for crowdsourcing the annotation of $\\mathbb {X}_{\\text{sam}}$, acquiring the set of captions $\\mathbb {C}_{\\text{sam}}^{z}=\\lbrace c_{\\text{sam}}^{z,u}\\rbrace _{u=1}^{N_{\\text{cp}}}$ for each $\\mathbf {x}_{\\text{sam}}^{z}$, where $c_{\\text{sam}}^{z,u}$ is an eight to 20 words long caption for $\\mathbf {x}_{\\text{sam}}^{z}$. In a nutshell, each audio sample $\\mathbf {x}_{\\text{sam}}^{z}$ gets annotated by $N_{\\text{cp}}$ different annotators in the first step of the framework. The annotators have access only to $\\mathbf {x}_{\\text{sam}}^{z}$ and not any other information. In the second step, different annotators are instructed to correct any grammatical errors, typos, and/or rephrase the captions. This process results in $2\\times N_{\\text{cp}}$ captions per $\\mathbf {x}_{\\text{sam}}^{z}$. Finally, three (again different) annotators have access to $\\mathbf {x}_{\\text{sam}}^{z}$ and its $2\\times N_{\\text{cp}}$ captions, and score each caption in terms of the accuracy of the description and fluency of English, using a scale from 1 to 4 (the higher the better). The captions for each $\\mathbf {x}_{\\text{sam}}^{z}$ are sorted (first according to accuracy of description and then according to fluency), and two groups are formed: the top $N_{\\text{cp}}$ and the bottom $N_{\\text{cp}}$ captions. The top $N_{\\text{cp}}$ captions are selected as $\\mathbb {C}_{\\text{sam}}^{z}$. We manually sanitize further $\\mathbb {C}_{\\text{sam}}^{z}$, e.g. by replacing “it's” with “it is” or “its”, making consistent hyphenation and compound words (e.g. “nonstop”, “non-stop”, and “non stop”), removing words or rephrasing captions pertaining to the content of speech (e.g. “French”, “foreign”), and removing/replacing named entities (e.g. “Windex”)."
],
"extractive_spans": [
"different annotators are instructed to correct any grammatical errors",
"score each caption in terms of the accuracy of the description and fluency of English, using a scale from 1 to 4",
"top $N_{\\text{cp}}$ captions are selected"
],
"free_form_answer": "",
"highlighted_evidence": [
"The annotators have access only to $\\mathbf {x}_{\\text{sam}}^{z}$ and not any other information. In the second step, different annotators are instructed to correct any grammatical errors, typos, and/or rephrase the captions. This process results in $2\\times N_{\\text{cp}}$ captions per $\\mathbf {x}_{\\text{sam}}^{z}$. Finally, three (again different) annotators have access to $\\mathbf {x}_{\\text{sam}}^{z}$ and its $2\\times N_{\\text{cp}}$ captions, and score each caption in terms of the accuracy of the description and fluency of English, using a scale from 1 to 4 (the higher the better). The captions for each $\\mathbf {x}_{\\text{sam}}^{z}$ are sorted (first according to accuracy of description and then according to fluency), and two groups are formed: the top $N_{\\text{cp}}$ and the bottom $N_{\\text{cp}}$ captions. The top $N_{\\text{cp}}$ captions are selected as $\\mathbb {C}_{\\text{sam}}^{z}$."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0492dbd085d76226439a531a6fbf5249990741d3",
"72c46d6a134093c522660247f5fd021a7e1fa087"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"06b50457d2ee5535076a88ae0a76ef0e07316654",
"c64052e61d9b0b284581a34d95354be4ebac97f1"
],
"answer": [
{
"evidence": [
"In order to provide an example of how to employ Clotho and some initial (baseline) results, we use a previously utilized method for audio captioning BIBREF3 which is based on an encoder-decoder scheme with attention. The method accepts as an input a length-$T$ sequence of 64 log mel-band energies $\\mathbf {X}\\in \\mathbb {R}^{T\\times 64}$, which is used as an input to a DNN which outputs a probability distribution of words. The generated caption is constructed from the output of the DNN, as in BIBREF3. We optimize the parameters of the method using the development split of Clotho and we evaluate it using the evaluation and the testing splits, separately."
],
"extractive_spans": [
"previously utilized method for audio captioning BIBREF3 which is based on an encoder-decoder scheme with attention"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to provide an example of how to employ Clotho and some initial (baseline) results, we use a previously utilized method for audio captioning BIBREF3 which is based on an encoder-decoder scheme with attention."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to provide an example of how to employ Clotho and some initial (baseline) results, we use a previously utilized method for audio captioning BIBREF3 which is based on an encoder-decoder scheme with attention. The method accepts as an input a length-$T$ sequence of 64 log mel-band energies $\\mathbf {X}\\in \\mathbb {R}^{T\\times 64}$, which is used as an input to a DNN which outputs a probability distribution of words. The generated caption is constructed from the output of the DNN, as in BIBREF3. We optimize the parameters of the method using the development split of Clotho and we evaluate it using the evaluation and the testing splits, separately.",
"We first extract 64 log mel-band energies, using a Hamming window of 46 ms, with 50% overlap. We tokenize the captions of the development split, using a one-hot encoding of the words. Since all the words in in the development split appear in the other two splits as well, there are no unknown tokens/words. We also employ the start- and end-of-sequence tokens ($\\left<\\text{SOS}\\right>$ and $\\left<\\text{EOS}\\right>$ respectively), in order to signify the start and end of a caption.",
"The encoder is a series of bi-directional gated recurrent units (bi-GRUs) BIBREF10, similarly to BIBREF3. The output dimensionality for the GRU layers (forward and backward GRUs have same dimensionality) is $\\lbrace 256, 256, 256\\rbrace $. The output of the encoder is processed by an attention mechanism and its output is given as an input to the decoder. The attention mechanism is a feed-forward neural network (FNN) and the decoder a GRU. Then, the output of the decoder is given as an input to another FNN with a softmax non-linearity, which acts as a classifier and outputs the probability distribution of words for the $i$-th time-step. To optimize the parameters of the employed method, we use five times each audio sample, using its five different captions as targeted outputs each time. We optimize jointly the parameters of the encoder, attention mechanism, decoder, and the classifier, using 150 epochs, the cross entropy loss, and Adam optimizer BIBREF11 with proposed hyper-parameters. Also, in each batch we pad the captions of the batch to the longest in the same batch, using the end-of-sequence token, and the input audio features to the longest ones, by prepending zeros."
],
"extractive_spans": [
"we use a previously utilized method for audio captioning BIBREF3 which is based on an encoder-decoder scheme with attention"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to provide an example of how to employ Clotho and some initial (baseline) results, we use a previously utilized method for audio captioning BIBREF3 which is based on an encoder-decoder scheme with attention. The method accepts as an input a length-$T$ sequence of 64 log mel-band energies $\\mathbf {X}\\in \\mathbb {R}^{T\\times 64}$, which is used as an input to a DNN which outputs a probability distribution of words. The generated caption is constructed from the output of the DNN, as in BIBREF3. We optimize the parameters of the method using the development split of Clotho and we evaluate it using the evaluation and the testing splits, separately.\n\nWe first extract 64 log mel-band energies, using a Hamming window of 46 ms, with 50% overlap. We tokenize the captions of the development split, using a one-hot encoding of the words. Since all the words in in the development split appear in the other two splits as well, there are no unknown tokens/words. We also employ the start- and end-of-sequence tokens ($\\left<\\text{SOS}\\right>$ and $\\left<\\text{EOS}\\right>$ respectively), in order to signify the start and end of a caption.\n\nThe encoder is a series of bi-directional gated recurrent units (bi-GRUs) BIBREF10, similarly to BIBREF3. The output dimensionality for the GRU layers (forward and backward GRUs have same dimensionality) is $\\lbrace 256, 256, 256\\rbrace $. The output of the encoder is processed by an attention mechanism and its output is given as an input to the decoder. The attention mechanism is a feed-forward neural network (FNN) and the decoder a GRU. Then, the output of the decoder is given as an input to another FNN with a softmax non-linearity, which acts as a classifier and outputs the probability distribution of words for the $i$-th time-step. To optimize the parameters of the employed method, we use five times each audio sample, using its five different captions as targeted outputs each time. We optimize jointly the parameters of the encoder, attention mechanism, decoder, and the classifier, using 150 epochs, the cross entropy loss, and Adam optimizer BIBREF11 with proposed hyper-parameters. Also, in each batch we pad the captions of the batch to the longest in the same batch, using the end-of-sequence token, and the input audio features to the longest ones, by prepending zeros."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What domain do the audio samples fall under?",
"How did they evaluate the quality of annotations?",
"How many annotators did they have?",
"What is their baseline method?"
],
"question_id": [
"4eb42c5d56d695030dd47ea7f6d65164924c4017",
"eff9192e05d23e9a67d10be0c89a7ab2b873995b",
"87523fb927354ddc8ad1357a81f766b7ea95f53c",
"9e9aa8af4b49e2e1e8cd9995293a7982ea1aba0e"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Distribution of tags in T0.01 for Xmed. Tags are sorted according to their frequency.",
"Table 1: Translation metrics for the evaluation and testing splits. Bn, C, M, and R correspond to BLEUn, CIDEr, METEOR, and ROUGE, respectively."
],
"file": [
"3-Figure1-1.png",
"5-Table1-1.png"
]
} | [
"How did they evaluate the quality of annotations?"
] | [
[
"1910.09387-Creation of Clotho dataset ::: Captions collection and processing-0",
"1910.09387-Creation of Clotho dataset ::: Captions collection and processing-1"
]
] | [
"They manually check the captions and employ extra annotators to further revise the annotations."
] | 236 |
1809.03695 | Evaluating Multimodal Representations on Sentence Similarity: vSTS, Visual Semantic Textual Similarity Dataset | In this paper we introduce vSTS, a new dataset for measuring the textual similarity of sentences using multimodal information. The dataset comprises images along with their respective textual captions. We describe the dataset both quantitatively and qualitatively, and claim that it is a valid gold standard for evaluating automatic multimodal textual similarity systems. We also describe initial experiments combining the multimodal information. | {
"paragraphs": [
[
"The success of word representations (embeddings) learned from text has motivated analogous methods to learn representations of longer sequences of text such as sentences, a fundamental step on any task requiring some level of text understanding BIBREF0 . Sentence representation is a challenging task that has to consider aspects such as compositionality, phrase similarity, negation, etc. In order to evaluate sentence representations, intermediate tasks such as Semantic Textual Similarity (STS) BIBREF1 or Natural Language Inference (NLI) BIBREF2 have been proposed, with STS being popular among unsupervised approaches. Through a set of campaigns, STS has produced several manually annotated datasets, where annotators measure the similarity among sentences, with higher scores for more similar sentences, ranging between 0 (no similarity) to 5 (semantic equivalence). Human annotators exhibit high inter-tagger correlation in this task.",
"In another strand of related work, tasks that combine representations of multiple modalities have gained increasing attention, including image-caption retrieval, video and text alignment, caption generation, and visual question answering. A common approach is to learn image and text embeddings that share the same space so that sentence vectors are close to the representation of the images they describe BIBREF3 , BIBREF4 . BIBREF5 provides an approach that learns to align images with descriptions. Joint spaces are typically learned combining various types of deep learning networks such us recurrent networks or convolutional networks, with some attention mechanism BIBREF6 , BIBREF7 , BIBREF8 .",
"The complementarity of visual and text representations for improved language understanding have been shown also on word representations, where embeddings have been combined with visual or perceptual input to produce grounded representations of words BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . These improved representation models have outperformed traditional text-only distributional models on a series of word similarity tasks, showing that visual information coming from images is complementary to textual information.",
"In this paper we present Visual Semantic Textual Similarity (vSTS), a dataset which allows to study whether better sentence representations can be built when having access to corresponding images, e.g. a caption and its image, in contrast with having access to the text alone. This dataset is based on a subset of the STS benchmark BIBREF1 , more specifically, the so called STS-images subset, which contains pairs of captions. Note that the annotations are based on the textual information alone. vSTS extends the existing subset with images, and aims at being a standard dataset to test the contribution of visual information when evaluating sentence representations.",
"In addition we show that the dataset allows to explore two hypothesis: H1) whether the image representations alone are able to predict caption similarity; H2) whether a combination of image and text representations allow to improve the text-only results on this similarity task."
],
[
"The dataset is derived from a subset of the caption pairs already annotated in the Semantic Textual Similarity Task (see below). We selected some caption pairs with their similarity annotations, and added the images corresponding to each caption. While the human annotators had access to only the text, we provide the system with both the caption and corresponding image, to check whether the visual representations can be exploited by the system to solve a text understanding and inference task.",
"As the original dataset contained captions referring to the same image, and the task would be trivial for pairs of the same image, we filtered those out, that is, we only consider caption pairs that refer to different images. In total, the dataset comprises 829 instances, each instance containing a pair of images and their description, as well as a similarity value that ranges from 0 to 5. The instances are derived from the following datasets:",
"Subset 2014 This subset is derived from the Image Descriptions dataset which is a subset of the PASCAL VOC-2008 dataset BIBREF16 . PASCAL VOC-2008 dataset consists of 1,000 images and has been used by a number of image description systems. In total, we obtained 374 pairs (out of 750 in the original file).",
"Subset 2015 The subset is derived from Image Descriptions dataset, which is a subset of 8k-picture of Flickr. 8k-Flicker is a benchmark collection for sentence-based image description, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We obtained 445 pairs (out of 750 in the original).",
"Score distribution Due to the caption pairs are generated from different images, strong bias towards low scores is expected (see Figure FIGREF3 ). We measured the score distribution in the two subsets separately and jointly, and see that the two subsets follow same distribution. As expected, the most frequent score is 0 (Table TABREF2 ), but the dataset still shows wide range of similarity values, with enough variability.",
""
],
[
"Experimental setting We split the vSTS dataset into development and test partitions, sampling 50% at random, while preserving the overall score distributions. In addition, we used part of the text-only STS benchmark dataset as a training set, discarding the examples that overlap with vSTS.",
"STS Models We checked four models of different complexity and modalities. The baseline is a word overlap model (overlap), in which input texts are tokenized with white space, vectorized according to a word index, and similarity is computed as the cosine of the vectors. We also calculated the centroid of Glove word embeddings BIBREF17 (caverage) and then computed the cosine as a second text-based model. The third text-based model is the state of the art Decomposable Attention Model BIBREF18 (dam), trained on the STS benchmark dataset as explained above. Finally, we use the top layer of a pretrained resnet50 model BIBREF19 to represent the images associated to text, and use the cosine for computing the similarity of a pair of images (resnet50).",
"Model combinations We combined the predictions of text based models with the predictions of the image based model (see Table TABREF4 for specific combinations). Models are combined using addition ( INLINEFORM0 ), multiplication ( INLINEFORM1 ) and linear regression (LR) of the two outputs. We use 10-fold cross-validation on the development test for estimating the parameters of the linear regressor.",
"Results Table TABREF4 shows the results of the single and combined models. Among single models, as expected, dam obtains the highest Pearson correlation ( INLINEFORM0 ). Interestingly, the results show that images alone are valid to predict caption similarity (0.61 INLINEFORM1 ). Results also show that image and sentence representations are complementary, with the best results for a combination of DAM and RESNET50 representations. These results confirm our hypotheses, and more generally, show indications that in systems that work with text describing the real world, the representation of the real world helps to better understand the text and do better inferences."
],
[
"We introduced the vSTS dataset, which contains caption pairs with human similarity annotations, where the systems can also access the actual images. The dataset aims at being a standard dataset to test the contribution of visual information when evaluating the similarity of sentences.",
"Experiments confirmed our hypotheses: image representations are useful for caption similarity and they are complementary to textual representations, as results improve significantly when two modalities are combined together.",
"In the future we plan to re-annotate the dataset with scores which are based on both the text and the image, in order to shed light on the interplay of images and text when understanding text."
],
[
"This research was partially supported by the Spanish MINECO (TUNER TIN2015-65308-C5-1-R and MUSTER PCIN-2015-226)."
]
],
"section_name": [
"Introduction",
"The vSTS dataset",
"Experiments",
"Conclusions and further work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"3981ce9b185b4c9f9f3a10dfa2bf3f1cfab7e006",
"801aa02a9a15930ed72967af430a4c43de630585"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"72984520164e4f7ac0dd2a530e9083af6fdfe31b",
"ac554c80e160152523525b5861180d7b17282b61"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"90e8ddbf812df92a4a6dd373d618f3d5f30d6d07",
"ce760f61c60b213be0ffaaa17ef1c12fd36eb048"
],
"answer": [
{
"evidence": [
"As the original dataset contained captions referring to the same image, and the task would be trivial for pairs of the same image, we filtered those out, that is, we only consider caption pairs that refer to different images. In total, the dataset comprises 829 instances, each instance containing a pair of images and their description, as well as a similarity value that ranges from 0 to 5. The instances are derived from the following datasets:"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"As the original dataset contained captions referring to the same image, and the task would be trivial for pairs of the same image, we filtered those out, that is, we only consider caption pairs that refer to different images. "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"As the original dataset contained captions referring to the same image, and the task would be trivial for pairs of the same image, we filtered those out, that is, we only consider caption pairs that refer to different images. In total, the dataset comprises 829 instances, each instance containing a pair of images and their description, as well as a similarity value that ranges from 0 to 5. The instances are derived from the following datasets:"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In total, the dataset comprises 829 instances, each instance containing a pair of images and their description, as well as a similarity value that ranges from 0 to 5"
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"04a0bc7419f3c78717d3a6b87ff0d46bc0142d77",
"93d80d2bb647a64067d717256bc38e5379482730"
],
"answer": [
{
"evidence": [
"As the original dataset contained captions referring to the same image, and the task would be trivial for pairs of the same image, we filtered those out, that is, we only consider caption pairs that refer to different images. In total, the dataset comprises 829 instances, each instance containing a pair of images and their description, as well as a similarity value that ranges from 0 to 5. The instances are derived from the following datasets:"
],
"extractive_spans": [
"829 instances"
],
"free_form_answer": "",
"highlighted_evidence": [
"In total, the dataset comprises 829 instances, each instance containing a pair of images and their description, as well as a similarity value that ranges from 0 to 5."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Subset 2014 This subset is derived from the Image Descriptions dataset which is a subset of the PASCAL VOC-2008 dataset BIBREF16 . PASCAL VOC-2008 dataset consists of 1,000 images and has been used by a number of image description systems. In total, we obtained 374 pairs (out of 750 in the original file).",
"Subset 2015 The subset is derived from Image Descriptions dataset, which is a subset of 8k-picture of Flickr. 8k-Flicker is a benchmark collection for sentence-based image description, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We obtained 445 pairs (out of 750 in the original)."
],
"extractive_spans": [],
"free_form_answer": "819",
"highlighted_evidence": [
"Subset 2014 This subset is derived from the Image Descriptions dataset which is a subset of the PASCAL VOC-2008 dataset BIBREF16 . ",
"In total, we obtained 374 pairs (out of 750 in the original file).",
"Subset 2015 The subset is derived from Image Descriptions dataset, which is a subset of 8k-picture of Flickr.",
"We obtained 445 pairs (out of 750 in the original)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"28fbb50b581d5093d1dbdee8cd224f43874f6229",
"ed80425a27ab2d894be62437338f8d5f68a66061"
],
"answer": [
{
"evidence": [
"As the original dataset contained captions referring to the same image, and the task would be trivial for pairs of the same image, we filtered those out, that is, we only consider caption pairs that refer to different images. In total, the dataset comprises 829 instances, each instance containing a pair of images and their description, as well as a similarity value that ranges from 0 to 5. The instances are derived from the following datasets:",
"Subset 2014 This subset is derived from the Image Descriptions dataset which is a subset of the PASCAL VOC-2008 dataset BIBREF16 . PASCAL VOC-2008 dataset consists of 1,000 images and has been used by a number of image description systems. In total, we obtained 374 pairs (out of 750 in the original file).",
"Subset 2015 The subset is derived from Image Descriptions dataset, which is a subset of 8k-picture of Flickr. 8k-Flicker is a benchmark collection for sentence-based image description, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We obtained 445 pairs (out of 750 in the original)."
],
"extractive_spans": [
" Image Descriptions dataset, which is a subset of 8k-picture of Flickr",
"Image Descriptions dataset which is a subset of the PASCAL VOC-2008 dataset BIBREF16"
],
"free_form_answer": "",
"highlighted_evidence": [
"The instances are derived from the following datasets:\n\nSubset 2014 This subset is derived from the Image Descriptions dataset which is a subset of the PASCAL VOC-2008 dataset BIBREF16 .",
"Subset 2015 The subset is derived from Image Descriptions dataset, which is a subset of 8k-picture of Flickr."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Subset 2014 This subset is derived from the Image Descriptions dataset which is a subset of the PASCAL VOC-2008 dataset BIBREF16 . PASCAL VOC-2008 dataset consists of 1,000 images and has been used by a number of image description systems. In total, we obtained 374 pairs (out of 750 in the original file).",
"Subset 2015 The subset is derived from Image Descriptions dataset, which is a subset of 8k-picture of Flickr. 8k-Flicker is a benchmark collection for sentence-based image description, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We obtained 445 pairs (out of 750 in the original)."
],
"extractive_spans": [
"PASCAL VOC-2008 dataset",
"8k-Flicker"
],
"free_form_answer": "",
"highlighted_evidence": [
"Subset 2014 This subset is derived from the Image Descriptions dataset which is a subset of the PASCAL VOC-2008 dataset BIBREF16 . ",
"Subset 2015 The subset is derived from Image Descriptions dataset, which is a subset of 8k-picture of Flickr. 8k-Flicker is a benchmark collection for sentence-based image description, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"In what language are the captions written in?",
"What is the average length of the captions?",
"Does each image have one caption?",
"What is the size of the dataset?",
"What is the source of the images and textual captions?"
],
"question_id": [
"1fa9b6300401530738995f14a37e074c48bc9fd8",
"9d98975ab0b75640b2c83e29e1438c76a959fbde",
"cc8bcea4052bf92f249dda276acc5fd16cac6fb4",
"35f48b8f73728fbdeb271b170804190b5448485a",
"16edc21a6abc89ee2280dccf1c867c2ac4552524"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 2. Pearson correlation r results in development and test. Note that A, B and C are text-only, and D is image-only.",
"Table 1. Main statistics of the dataset."
],
"file": [
"2-Table2-1.png",
"2-Table1-1.png"
]
} | [
"What is the size of the dataset?"
] | [
[
"1809.03695-The vSTS dataset-2",
"1809.03695-The vSTS dataset-3",
"1809.03695-The vSTS dataset-1"
]
] | [
"819"
] | 237 |
2002.08902 | Application of Pre-training Models in Named Entity Recognition | Named Entity Recognition (NER) is a fundamental Natural Language Processing (NLP) task to extract entities from unstructured data. The previous methods for NER were based on machine learning or deep learning. Recently, pre-training models have significantly improved performance on multiple NLP tasks. In this paper, firstly, we introduce the architecture and pre-training tasks of four common pre-training models: BERT, ERNIE, ERNIE2.0-tiny, and RoBERTa. Then, we apply these pre-training models to a NER task by fine-tuning, and compare the effects of the different model architecture and pre-training tasks on the NER task. The experiment results showed that RoBERTa achieved state-of-the-art results on the MSRA-2006 dataset. | {
"paragraphs": [
[
"Named Entity Recognition (NER) is a basic and important task in Natural Language Processing (NLP). It aims to recognize and classify named entities, such as person names and location namesBIBREF0. Extracting named entities from unstructured data can benefit many NLP tasks, for example Knowledge Graph (KG), Decision-making Support System (DSS), and Question Answering system. Researchers used rule-based and machine learning methods for the NER in the early yearsBIBREF1BIBREF2. Recently, with the development of deep learning, deep neural networks have improved the performance of NER tasksBIBREF3BIBREF4. However, it may still be inefficient to use deep neural networks because the performance of these methods depends on the quality of labeled data in training sets while creating annotations for unstructured data is especially difficultBIBREF5. Therefore, researchers hope to find an efficient method to extract semantic and syntactic knowledge from a large amount of unstructured data, which is also unlabeled. Then, apply the semantic and syntactic knowledge to improve the performance of NLP task effectively.",
"Recent theoretical developments have revealed that word embeddings have shown to be effective for improving many NLP tasks. The Word2Vec and Glove models represent a word as a word embedding, where similar words have similar word embeddingsBIBREF6. However, the Word2Vec and Glove models can not solve the problem of polysemy. Researchers have proposed some pre-training models, such as BERT, ERNIE, and RoBERTa, to learn contextualized word embeddings from unstructured text corpusBIBREF7BIBREF8BIBREF9. These models not only solve the problem of polysemy but also obtain more accurate word representations. Therefore, researchers pay more attention to how to apply these pre-training models to improve the performance of NLP tasks.",
"The purpose of this paper is to introduce the structure and pre-training tasks of four common pre-trained models (BERT, ERNIE, ERNIE2.0-tiny, RoBERTa), and how to apply these models to a NER task by fine-tuning. Moreover, we also conduct experiments on the MSRA-2006 dataset to test the effects of different pre-training models on the NER task, and discuss the reasons for these results from the model architecture and pre-training tasks respectively."
],
[
"Named entity recognition (NER) is the basic task of the NLP, such as information extraction and data mining. The main goal of the NER is to extract entities (persons, places, organizations and so on) from unstructured documents. Researchers have used rule-based and dictionary-based methods for the NERBIBREF1. Because these methods have poor generalization properties, researchers have proposed machine learning methods, such as Hidden Markov Model (HMM) and Conditional Random Field (CRF)BIBREF2BIBREF10. But machine learning methods require a lot of artificial features and can not avoid costly feature engineering. In recent years, deep learning, which is driven by artificial intelligence and cognitive computing, has been widely used in multiple NLP fields. Huang $et$ $al$. BIBREF3 proposed a model that combine the Bidirectional Long Short-Term Memory (BiLSTM) with the CRF. It can use both forward and backward input features to improve the performance of the NER task. Ma and Hovy BIBREF11 used a combination of the Convolutional Neural Networks (CNN) and the LSTM-CRF to recognize entities. Chiu and Nichols BIBREF12 improved the BiLSTM-CNN model and tested it on the CoNLL-2003 corpus."
],
[
"As mentioned above, the performance of deep learning methods depends on the quality of labeled training sets. Therefore, researchers have proposed pre-training models to improve the performance of the NLP tasks through a large number of unlabeled data. Recent research on pre-training models has mainly focused on BERT. For example, R. Qiao $et$ $al$. and N. Li $et$ $al$. BIBREF13BIBREF14 used BERT and ELMO respectively to improve the performance of entity recognition in chinese clinical records. E. Alsentzer $et$ $al$. , L. Yao $et$ $al$. and K. Huang $et$ $al$. BIBREF15BIBREF16BIBREF17 used domain-specific corpus to train BERT(the model structure and pre-training tasks are unchanged), and used this model for a domain-specific task, obtaining the result of SOTA."
],
[
"In this section, we first introduce the four pre-trained models (BERT, ERNIE, ERNIE 2.0-tiny, RoBERTa), including their model structures and pre-training tasks. Then we introduce how to use them for the NER task through fine-tuning."
],
[
"BERT is a pre-training model that learns the features of words from a large amount of corpus through unsupervised learningBIBREF7.",
"There are different kinds of structures of BERT models. We chose the BERT-base model structure. BERT-base's architecture is a multi-layer bidirectional TransformerBIBREF18. The number of layers is $L=12$, the hidden size is $H=768$, and the number of self-attention heads is $A=12$BIBREF7.",
"Unlike ELMO, BERT's pre-training tasks are not some kind of N-gram language model prediction tasks, but the \"Masked LM (MLM)\" and \"Next Sentence Prediction (NSP)\" tasks. For MLM, like a $Cloze$ task, the model mask 15% of all tokens in each input sequence at random, and predict the masked token. For NSP, the input sequences are sentence pairs segmented with [SEQ]. Among them, only 50% of the sentence pairs are positive samples."
],
[
"ERNIE is also a pre-training language model. In addition to a basic-level masking strategy, unlike BERT, ERNIE using entity-level and phrase-level masking strategies to obtain the language representations enhanced by knowledge BIBREF8.",
"ERNIE has the same model structure as BERT-base, which uses 12 Transformer encoder layers, 768 hidden units and 12 attention heads.",
"As mentioned above, ERNIE using three masking strategies: basic-level masking, phrase-level masking, and entity-level masking. the basic-level making is to mask a character and train the model to predict it. Phrase-level and entity-level masking are to mask a phrase or an entity and predict the masking part. In addition, ERNIE also performs the \"Dialogue Language Model (DLM)\" task to judge whether a multi-turn conversation is real or fake BIBREF8."
],
[
"ERNIE2.0 is a continual pre-training framework. It could incrementally build and train a large variety of pre-training tasks through continual multi-task learning BIBREF19.",
"ERNIE2.0-tiny compresses ERNIE 2.0 through the method of structure compression and model distillation. The number of Transformer layers is reduced from 12 to 3, and the number of hidden units is increased from 768 to 1024.",
"ERNIE2.0-tiny's pre-training task is called continual pre-training. The process of continual pre-training including continually constructing unsupervised pre-training tasks with big data and updating the model via multi-task learning. These tasks include word-aware tasks, structure-aware tasks, and semantic-aware tasks."
],
[
"RoBERTa is similar to BERT, except that it changes the masking strategy and removes the NSP taskBIBREF9.",
"Like ERNIE, RoBERTa has the same model structure as BERT, with 12 Transformer layers, 768 hidden units, and 12 self-attention heads.",
"RoBERTa removes the NSP task in BERT and changes the masking strategy from static to dynamicBIBREF9. BERT performs masking once during data processing, resulting in a single static mask. However, RoBoERTa changes masking position in every epoch. Therefore, the pre-training model will gradually adapt to different masking strategies and learn different language representations."
],
[
"After the pre-training process, pre-training models obtain abundant semantic knowledge from unlabeled pre-training corpus through unsupervised learning. Then, we use the fine-tuning approach to apply pre-training models in downstream tasks. As shown in Figure 1, we add the Fully Connection (FC) layer and the CRF layer after the output of pre-training models. The vectors output by pre-training models can be regarded as the representations of input sentences. Therefore, we use a fully connection layer to obtain the higher level and more abstract representations. The tags of the output sequence have strong restrictions and dependencies. For example, \"I-PER\" must appear after \"B-PER\". Conditional Random Field, as an undirected graphical model, can obtain dependencies between tags. We add the CRF layer to ensure the output order of tags."
],
[
"We conducted experiments on Chinese NER datasets to demonstrate the effectiveness of the pre-training models specified in section III. For the dataset, we used the MSRA-2006 published by Microsoft Research Asia.",
"The experiments were conducted on the AI Studio platform launched by the Baidu. This platform has a build-in deep learning framework PaddlePaddle and is equipped with a V100 GPU. The pre-training models mentioned above were downloaded by PaddleHub, which is a pre-training model management toolkit. It is also launched by the Baidu. For hyper-parameter configuration, we adjusted them according to the performance on development sets. In this article, the number of the epoch is 2, the learning rate is 5e-5, and the batch size is 16.",
"The BiGRU+CRF model was used as the baseline model. Table I shows that the baseline model has already achieved an F1 value of 90.32. However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model."
],
[
"This section discusses the experimental results in detail. We will analyze the different model structures and pre-training tasks on the effect of the NER task.",
"First of all, it is shown that the deeper the layer, the better the performance. All pre-training models have 12 Transformer layers, except ERNIE2.0-tiny. Although Ernie2.0-tiny increases the number of hidden units and improves the pre-training task with continual pre-training, 3 Transformer layers can not extract semantic knowledge well. The F1 value of ERNIE-2.0-tiny is even lower than the baseline model.",
"Secondly, for pre-training models with the same model structure, RoBERTa obtained the result of SOTA. BERT and ERNIE retain the sentence pre-training tasks of NSP and DLM respectively, while RoBERTa removes the sentence-level pre-training task because Liu $et$ $al$. BIBREF9 hypothesizes the model can not learn long-range dependencies. The results confirm the above hypothesis. For the NER task, sentence-level pre-training tasks do not improve performance. In contrast, RoBERTa removes the NSP task and improves the performance of entity recognition. As described by Liu $et$ $al$. BIBREF9, the NSP and the MLP are designed to improve the performance on specific downstream tasks, such as the SQuAD 1.1, which requires reasoning about the relationships between pairs of sentences. However, the results show that the NER task does not rely on sentence-level knowledge, and using sentence-level pre-training tasks hurts performance because the pre-training models may not able to learn long-range dependencies.",
"Moreover, as mentioned before, RoBERTa could adapt to different masking strategies and acquires richer semantic representations with the dynamic masking strategy. In contrast, BERT and ERNIE use the static masking strategy in every epoch. In addition, the results in this paper show that the F1 value of ERNIE is slightly lower than BERT. We infer that ERNIE may introduce segmentation errors when performing entity-level and phrase-level masking."
],
[
"In this paper, we exploit four pre-training models (BERT, ERNIE, ERNIE2.0-tiny, RoBERTa) for the NER task. Firstly, we introduce the architecture and pre-training tasks of these pre-training models. Then, we apply the pre-training models to the target task through a fine-tuning approach. During fine-tuning, we add a fully connection layer and a CRF layer after the output of pre-training models. Results showed that using the pre-training models significantly improved the performance of recognition. Moreover, results provided a basis that the structure and pre-training tasks in RoBERTa model are more suitable for NER tasks.",
"In future work, investigating the model structure of different downstream tasks might prove important."
],
[
"This research was funded by the major special project of Anhui Science and Technology Department (Grant: 18030801133) and Science and Technology Service Network Initiative (Grant: KFJ-STS-ZDTP-079)."
]
],
"section_name": [
"Introduction",
"Related work ::: Named Entity Recognition",
"Related work ::: Pre-training model",
"Methods",
"Methods ::: BERT",
"Methods ::: ERNIE",
"Methods ::: ERNIE2.0-tiny",
"Methods ::: RoBERTa",
"Methods ::: Applying Pre-training Models",
"Experiments and Results",
"Discussion",
"Conclusion",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"72a8d0c759f1ae62645286bc6f4ff029fa20382e",
"7b46e3943a46acd2d90a79d2288ecf8b95113e64"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table I shows that the baseline model has already achieved an F1 value of 90.32. However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model.",
"The BiGRU+CRF model was used as the baseline model. Table I shows that the baseline model has already achieved an F1 value of 90.32. However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model."
],
"extractive_spans": [],
"free_form_answer": "Precision, recall and F1 score.",
"highlighted_evidence": [
"FLOAT SELECTED: Table I shows that the baseline model has already achieved an F1 value of 90.32. However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model.",
"Table I shows that the baseline model has already achieved an F1 value of 90.32."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The BiGRU+CRF model was used as the baseline model. Table I shows that the baseline model has already achieved an F1 value of 90.32. However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model.",
"FLOAT SELECTED: Table I shows that the baseline model has already achieved an F1 value of 90.32. However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model."
],
"extractive_spans": [],
"free_form_answer": "Precision \nRecall\nF1",
"highlighted_evidence": [
"Table I shows that the baseline model has already achieved an F1 value of 90.32.",
"FLOAT SELECTED: Table I shows that the baseline model has already achieved an F1 value of 90.32. However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"0d65dc0d6ccf874c320ae41e940f6f6144155d72",
"0f5eb549649eb33811c1b365ec351b9105eafc30"
],
"answer": [
{
"evidence": [
"The BiGRU+CRF model was used as the baseline model. Table I shows that the baseline model has already achieved an F1 value of 90.32. However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model."
],
"extractive_spans": [
"BiGRU+CRF"
],
"free_form_answer": "",
"highlighted_evidence": [
"The BiGRU+CRF model was used as the baseline model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The BiGRU+CRF model was used as the baseline model. Table I shows that the baseline model has already achieved an F1 value of 90.32. However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model."
],
"extractive_spans": [
"BiGRU+CRF"
],
"free_form_answer": "",
"highlighted_evidence": [
"The BiGRU+CRF model was used as the baseline model. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"04b6d589a7b9d893dfcd8ffe2734e5b8c0d3ba78",
"a86e2213b05b64301474fa0b612a186a331eb584"
],
"answer": [
{
"evidence": [
"The BiGRU+CRF model was used as the baseline model. Table I shows that the baseline model has already achieved an F1 value of 90.32. However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model."
],
"extractive_spans": [
" the RoBERTa model achieves the highest F1 value of 94.17"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table I shows that the baseline model has already achieved an F1 value of 90.32. However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The BiGRU+CRF model was used as the baseline model. Table I shows that the baseline model has already achieved an F1 value of 90.32. However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model."
],
"extractive_spans": [
"F1 value of 94.17"
],
"free_form_answer": "",
"highlighted_evidence": [
"Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"427e0803d1c2732f0a0d13a7b46b0daac5f5d0cc",
"c0cf694327227ed839d8a9f3fb8c0e29a5a032b1"
],
"answer": [
{
"evidence": [
"The BiGRU+CRF model was used as the baseline model. Table I shows that the baseline model has already achieved an F1 value of 90.32. However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model."
],
"extractive_spans": [
"ERNIE-tiny"
],
"free_form_answer": "",
"highlighted_evidence": [
"Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The BiGRU+CRF model was used as the baseline model. Table I shows that the baseline model has already achieved an F1 value of 90.32. However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model."
],
"extractive_spans": [
"ERNIE-tiny"
],
"free_form_answer": "",
"highlighted_evidence": [
"However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"what evaluation metrics did they use?",
"what was the baseline?",
"what were roberta's results?",
"which was the worst performing model?"
],
"question_id": [
"3b8da74f5b359009d188cec02adfe4b9d46a768f",
"6bce04570d4745dcfaca5cba64075242308b65cf",
"37e6ce5cfc9d311e760dad8967d5085446125408",
"6683008e0a8c4583058d38e185e2e2e18ac6cf50"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1. Fine-tuning",
"Table I shows that the baseline model has already achieved an F1 value of 90.32. However, using the pre-training models can significantly increase F1 values by 1 to 2 percentage points except for ERNIE-tiny model. Among the pre-training models, the RoBERTa model achieves the highest F1 value of 94.17, while the value of ERNIE-tiny is relatively low, even 4 percentage points lower than the baseline model."
],
"file": [
"2-Figure1-1.png",
"3-TableI-1.png"
]
} | [
"what evaluation metrics did they use?"
] | [
[
"2002.08902-Experiments and Results-2",
"2002.08902-3-TableI-1.png"
]
] | [
"Precision \nRecall\nF1"
] | 238 |
2002.04815 | Utilizing BERT Intermediate Layers for Aspect Based Sentiment Analysis and Natural Language Inference | Aspect based sentiment analysis aims to identify the sentimental tendency towards a given aspect in text. Fine-tuning of pretrained BERT performs excellently on this task and achieves state-of-the-art performance. Existing BERT-based works only utilize the last output layer of BERT and ignore the semantic knowledge in the intermediate layers. This paper explores the potential of utilizing BERT intermediate layers to enhance the performance of fine-tuning of BERT. To the best of our knowledge, no existing work has explored this direction. To show the generality, we also apply this approach to a natural language inference task. Experimental results demonstrate the effectiveness and generality of the proposed approach. | {
"paragraphs": [
[
"Aspect based sentiment analysis (ABSA) is an important task in natural language processing. It aims at collecting and analyzing the opinions toward the targeted aspect in an entire text. In the past decade, ABSA has received great attention due to a wide range of applications BIBREF0, BIBREF1. Aspect-level (also mentioned as “target-level”) sentiment classification as a subtask of ABSA BIBREF0 aims at judging the sentiment polarity for a given aspect. For example, given a sentence “I hated their service, but their food was great”, the sentiment polarities for the target “service” and “food” are negative and positive respectively.",
"Most of existing methods focus on designing sophisticated deep learning models to mining the relation between context and the targeted aspect. Majumder et al., majumder2018iarm adopt a memory network architecture to incorporate the related information of neighboring aspects. Fan et al., fan2018multi combine the fine-grained and coarse-grained attention to make LSTM treasure the aspect-level interactions. However, the biggest challenge in ABSA task is the shortage of training data, and these complex models did not lead to significant improvements in outcomes.",
"Pre-trained language models can leverage large amounts of unlabeled data to learn the universal language representations, which provide an effective solution for the above problem. Some of the most prominent examples are ELMo BIBREF2, GPT BIBREF3 and BERT BIBREF4. BERT is based on a multi-layer bidirectional Transformer, and is trained on plain text for masked word prediction and next sentence prediction tasks. The pre-trained BERT model can then be fine-tuned on downstream task with task-specific training data. Sun et al., sun2019utilizing utilize BERT for ABSA task by constructing a auxiliary sentences, Xu et al., xu2019bert propose a post-training approach for ABSA task, and Liu et al., liu2019multi combine multi-task learning and pre-trained BERT to improve the performance of various NLP tasks. However, these BERT-based studies follow the canonical way of fine-tuning: append just an additional output layer after BERT structure. This fine-tuning approach ignores the rich semantic knowledge contained in the intermediate layers. Due to the multi-layer structure of BERT, different layers capture different levels of representations for the specific task after fine-tuning.",
"This paper explores the potential of utilizing BERT intermediate layers for facilitating BERT fine-tuning. On the basis of pre-trained BERT, we add an additional pooling module, design some pooling strategies for integrating the multi-layer representations of the classification token. Then, we fine tune the pre-trained BERT model with this additional pooling module and achieve new state-of-the-art results on ABSA task. Additional experiments on a large Natural Language Inference (NLI) task illustrate that our method can be easily applied to more NLP tasks with only a minor adjustment.",
"Main contributions of this paper can be summarized as follows:",
"It is the first to explore the potential of utilizing intermediate layers of BERT and we design two effective information pooling strategies to solve aspect based sentiment analysis task.",
"Experimental results on ABSA datasets show that our method is better than the vanilla BERT model and can boost other BERT-based models with a minor adjustment.",
"Additional experiments on a large NLI dataset illustrate that our method has a certain degree of versatility, and can be easily applied to some other NLP tasks."
],
[
"Given a sentence-apsect pair, ABSA aims at predicting the sentiment polarity (positive, negative or neural) of the sentence over the aspect."
],
[
"Given a pair of sentences, the goal is to predict whether a sentence is an entailment, contradiction, or neutral with respect to the other sentence."
],
[
"Given the hidden states of the first token (i.e., [CLS] token) $\\mathbf {h}_{\\tiny \\textsc {CLS}} = \\lbrace h_{\\tiny \\textsc {CLS}}^1, h_{\\tiny \\textsc {CLS}}^2, ..., h_{\\tiny \\textsc {CLS}}^L\\rbrace $ from all $L$ intermediate layers. The canonical way of fine-tuning simply take the final one (i.e., $h_{\\tiny \\textsc {CLS}}^L$) for classification, which may inevitably lead to information losing during fine-tuning. We design two pooling strategies for utilizing $\\mathbf {h}_{\\tiny \\textsc {CLS}}$: LSTM-Pooling and Attention-Pooling. Accordingly, the models are named BERT-LSTM and BERT-Attention. The overview of BERT-LSTM is shown in Figure FIGREF8. Similarly, BERT-Attention replaces the LSTM module with an attention module."
],
[
"Representation of the hidden states $\\mathbf {h}_{\\tiny \\textsc {CLS}}$ is a special sequence: an abstract-to-specific sequence. Since LSTM network is inherently suitable for processing sequential information, we use a LSTM network to connect all intermediate representations of the [CLS] token, and the output of the last LSTM cell is used as the final representation. Formally,"
],
[
"Intuitively, attention operation can learn the contribution of each $h_{\\tiny \\textsc {CLS}}^i$. We use a dot-product attention module to dynamically combine all intermediates:",
"where $W_h^T$ and $\\mathbf {q}$ are learnable weights.",
"Finally, we pass the pooled output $o$ to a fully-connected layer for label prediction:"
],
[
"In this section, we present our methods for BERT-based model fine-tuning on three ABSA datasets. To show the generality, we also conduct experiments on a large and popular NLI task. We also apply the same strategy to existing state-of-the-art BERT-based models and demonstrate the effectiveness of our approaches."
],
[
"This section briefly describes three ABSA datasets and SNLI dataset. Statistics of these datasets are shown in Table TABREF15."
],
[
"We use three popular datasets in ABSA task: Restaurant reviews and Laptop reviews from SemEval 2014 Task 4 BIBREF5, and ACL 14 Twitter dataset BIBREF6."
],
[
"The Stanford Natural Language Inference BIBREF7 dataset contains 570k human annotated hypothesis/premise pairs. This is the most widely used entailment dataset for natural language inference."
],
[
"All experiments are conducted with BERT$_{\\tiny \\textsc {BASE}}$ (uncased) with different weights. During training, the coefficient $\\lambda $ of $\\mathcal {L}_2$ regularization item is $10^{-5}$ and dropout rate is 0.1. Adam optimizer BIBREF8 with learning rate of 2e-5 is applied to update all the parameters. The maximum number of epochs was set to 10 and 5 for ABSA and SNLI respectively. In this paper, we use 10-fold cross-validation, which performs quite stable in ABSA datasets.",
"Since the sizes of ABSA datasets are small and there is no validation set, the results between two consecutive epochs may be significantly different. In order to conduct fair and rigorous experiments, we use 10-fold cross-validation for ABSA task, which achieves quite stable results. The final result is obtained as the average of 10 individual experiments.",
"The SNLI dataset is quite large, so we simply take the best-performing model on the development set for testing."
],
[
"Since BERT outperforms previous non-BERT-based studies on ABSA task by a large margin, we are not going to compare our models with non-BERT-based models. The 10-fold cross-validation results on ABSA datasets are presented in Table TABREF19.",
"The BERT$_{\\tiny \\textsc {BASE}}$, BERT-LSTM and BERT-Attention are both initialized with pre-trained BERT$_{\\tiny \\textsc {BASE}}$ (uncased). We observe that BERT-LSTM and BERT-Attention outperform vanilla BERT$_{\\tiny \\textsc {BASE}}$ model on all three datasets. Moreover, BERT-LSTM and BERT-Attention have respective advantages on different datasets. We suspect the reason is that Attention-Pooling and LSTM-Pooling perform differently during fine-tuning on different datasets. Overall, our pooling strategies strongly boost the performance of BERT on these datasets.",
"The BERT-PT, BERT-PT-LSTM and BERT-PT-Attention are all initialized with post-trained BERT BIBREF9 weights . We can see that both BERT-PT-LSTM and BERT-PT-Attention outperform BERT-PT with a large margin on Laptop and Restaurant dataset . From the results, the conclusion that utilizing intermediate layers of BERT brings better results is still true."
],
[
"In order to visualize how BERT-LSTM benefits from sequential representations of intermediate layers, we use principal component analysis (PCA) to visualize the intermediate representations of [CLS] token, shown in figure FIGREF20. There are three classes of the sentiment data, illustrated in blue, green and red, representing positive, neural and negative, respectively. Since the task-specific information is mainly extracted from the last six layers of BERT, we simply illustrate the last six layers. It is easy to draw the conclusion that BERT-LSTM partitions different classes of data faster and more dense than vanilla BERT under the same training epoch."
],
[
"To validate the generality of our method, we conduct experiment on SNLI dataset and apply same pooling strategies to currently state-of-the-art method MT-DNN BIBREF11, which is also a BERT based model, named MT-DNN-Attention and MT-DNN-LSTM.",
"As shown in Table TABREF26, the results were consistent with those on ABSA. From the results, BERT-Attention and BERT-LSTM perform better than vanilla BERT$_{\\tiny \\textsc {BASE}}$. Furthermore, MT-DNN-Attention and MT-DNN-LSTM outperform vanilla MT-DNN on Dev set, and are slightly inferior to vanilla MT-DNN on Test set. As a whole, our pooling strategies generally improve the vanilla BERT-based model, which draws the same conclusion as on ABSA.",
"The gains seem to be small, but the improvements of the method are straightforwardly reasonable and the flexibility of our strategies makes it easier to apply to a variety of other tasks."
],
[
"In this work, we explore the potential of utilizing BERT intermediate layers and propose two effective pooling strategies to enhance the performance of fine-tuning of BERT. Experimental results demonstrate the effectiveness and generality of the proposed approach."
]
],
"section_name": [
"Introduction",
"Methodology ::: Task description ::: ABSA",
"Methodology ::: Task description ::: NLI",
"Methodology ::: Utilizing Intermediate Layers: Pooling Module",
"Methodology ::: Utilizing Intermediate Layers: Pooling Module ::: LSTM-Pooling",
"Methodology ::: Utilizing Intermediate Layers: Pooling Module ::: Attention-Pooling",
"Experiments",
"Experiments ::: Datasets",
"Experiments ::: Datasets ::: ABSA",
"Experiments ::: Datasets ::: SNLI",
"Experiments ::: Experiment Settings",
"Experiments ::: Experiment-I: ABSA",
"Experiments ::: Experiment-I: ABSA ::: Visualization of Intermediate Layers",
"Experiments ::: Experiment-II: SNLI",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"480ec61fb8f50411aa14f5ff17a882192f9eacf3",
"b62da1cb00965c83700000450001028653a31acb"
],
"answer": [
{
"evidence": [
"This section briefly describes three ABSA datasets and SNLI dataset. Statistics of these datasets are shown in Table TABREF15.",
"FLOAT SELECTED: Table 1: Summary of the datasets. For ABSA dataset, we randomly chose 10% of #Train as #Dev as there is no #Dev in official dataset."
],
"extractive_spans": [],
"free_form_answer": "Three datasets had total of 14.5k samples.",
"highlighted_evidence": [
"Statistics of these datasets are shown in Table TABREF15.",
"FLOAT SELECTED: Table 1: Summary of the datasets. For ABSA dataset, we randomly chose 10% of #Train as #Dev as there is no #Dev in official dataset."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Summary of the datasets. For ABSA dataset, we randomly chose 10% of #Train as #Dev as there is no #Dev in official dataset."
],
"extractive_spans": [],
"free_form_answer": "2900, 4700, 6900",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Summary of the datasets. For ABSA dataset, we randomly chose 10% of #Train as #Dev as there is no #Dev in official dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"c1793dd2d4bf7559d6e84ac320ee4ff6c72d6895",
"eeedc811d49963f287518d1598867d0844fd3790"
],
"answer": [
{
"evidence": [
"The Stanford Natural Language Inference BIBREF7 dataset contains 570k human annotated hypothesis/premise pairs. This is the most widely used entailment dataset for natural language inference."
],
"extractive_spans": [
"Stanford Natural Language Inference BIBREF7"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Stanford Natural Language Inference BIBREF7 dataset contains 570k human annotated hypothesis/premise pairs. This is the most widely used entailment dataset for natural language inference."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"This section briefly describes three ABSA datasets and SNLI dataset. Statistics of these datasets are shown in Table TABREF15."
],
"extractive_spans": [
"SNLI"
],
"free_form_answer": "",
"highlighted_evidence": [
"This section briefly describes three ABSA datasets and SNLI dataset"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"099c08a659d0714604e50259526e9ee95e4b5bdc",
"487bf85c60c97033b6b43875bb246b85792692b1"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Intuitively, attention operation can learn the contribution of each $h_{\\tiny \\textsc {CLS}}^i$. We use a dot-product attention module to dynamically combine all intermediates:",
"where $W_h^T$ and $\\mathbf {q}$ are learnable weights."
],
"extractive_spans": [
"dot-product attention module to dynamically combine all intermediates"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use a dot-product attention module to dynamically combine all intermediates:\n\nwhere $W_h^T$ and $\\mathbf {q}$ are learnable weights."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"04b785a67882120037ad16e5bb3a889697ff8b4e",
"e5fbeb06e9814f5beceff7e3b95840ee09c3b183"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 2: Visualization of BERT and BERT-LSTM on Twitter dataset with the last six intermediates layers of BERT at the end of the 1st and 6th epoch. Among the PCA results, (a) and (b) illustrate that BERT-LSTM converges faster than BERT after just one epoch, while (c) and (d) demonstrate that BERT-LSTM cluster each class of data more dense and discriminative than BERT after the model nearly converges."
],
"extractive_spans": [],
"free_form_answer": "12",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 2: Visualization of BERT and BERT-LSTM on Twitter dataset with the last six intermediates layers of BERT at the end of the 1st and 6th epoch. Among the PCA results, (a) and (b) illustrate that BERT-LSTM converges faster than BERT after just one epoch, while (c) and (d) demonstrate that BERT-LSTM cluster each class of data more dense and discriminative than BERT after the model nearly converges."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As shown in Table TABREF26, the results were consistent with those on ABSA. From the results, BERT-Attention and BERT-LSTM perform better than vanilla BERT$_{\\tiny \\textsc {BASE}}$. Furthermore, MT-DNN-Attention and MT-DNN-LSTM outperform vanilla MT-DNN on Dev set, and are slightly inferior to vanilla MT-DNN on Test set. As a whole, our pooling strategies generally improve the vanilla BERT-based model, which draws the same conclusion as on ABSA."
],
"extractive_spans": [
"BERT-Attention and BERT-LSTM perform better than vanilla BERT$_{\\tiny \\textsc {BASE}}$"
],
"free_form_answer": "",
"highlighted_evidence": [
"From the results, BERT-Attention and BERT-LSTM perform better than vanilla BERT$_{\\tiny \\textsc {BASE}}$."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How long is their sentiment analysis dataset?",
"What NLI dataset was used?",
"What aspects are considered?",
"What layer gave the better results?"
],
"question_id": [
"7bd24920163a4801b34d0a50aed957ba8efed0ab",
"df01e98095ba8765d9ab0d40c9e8ef34b64d3700",
"a7a433de17d0ee4dd7442d7df7de17e508baf169",
"abfa3daaa984dfe51289054f4fb062ce93f31d19"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"BERT Sentiment Analysis",
"BERT Sentiment Analysis",
"BERT Sentiment Analysis",
"BERT Sentiment Analysis"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Overview of the proposed BERT-LSTM model. Pooling Module is responsible for connecting the intermediate representations obtained by Transformers of BERT.",
"Table 1: Summary of the datasets. For ABSA dataset, we randomly chose 10% of #Train as #Dev as there is no #Dev in official dataset.",
"Table 2: Accuracy and macro-F1 (%) for aspect based sentiment analysis on three popular datasets.",
"Figure 2: Visualization of BERT and BERT-LSTM on Twitter dataset with the last six intermediates layers of BERT at the end of the 1st and 6th epoch. Among the PCA results, (a) and (b) illustrate that BERT-LSTM converges faster than BERT after just one epoch, while (c) and (d) demonstrate that BERT-LSTM cluster each class of data more dense and discriminative than BERT after the model nearly converges.",
"Table 3: Classification accuracy (%) for natural language inference on SNLI dataset. Results with “*” are obtained from the official SNLI leaderboard (https://nlp.stanford.edu/projects/snli/)."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"3-Table2-1.png",
"4-Figure2-1.png",
"4-Table3-1.png"
]
} | [
"How long is their sentiment analysis dataset?",
"What layer gave the better results?"
] | [
[
"2002.04815-Experiments ::: Datasets-0",
"2002.04815-3-Table1-1.png"
],
[
"2002.04815-Experiments ::: Experiment-II: SNLI-1",
"2002.04815-4-Figure2-1.png"
]
] | [
"2900, 4700, 6900",
"12"
] | 239 |
1704.03279 | Unfolding and Shrinking Neural Machine Translation Ensembles | Ensembling is a well-known technique in neural machine translation (NMT) to improve system performance. Instead of a single neural net, multiple neural nets with the same topology are trained separately, and the decoder generates predictions by averaging over the individual models. Ensembling often improves the quality of the generated translations drastically. However, it is not suitable for production systems because it is cumbersome and slow. This work aims to reduce the runtime to be on par with a single system without compromising the translation quality. First, we show that the ensemble can be unfolded into a single large neural network which imitates the output of the ensemble system. We show that unfolding can already improve the runtime in practice since more work can be done on the GPU. We proceed by describing a set of techniques to shrink the unfolded network by reducing the dimensionality of layers. On Japanese-English we report that the resulting network has the size and decoding speed of a single NMT network but performs on the level of a 3-ensemble system. | {
"paragraphs": [
[
"The top systems in recent machine translation evaluation campaigns on various language pairs use ensembles of a number of NMT systems BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . Ensembling BIBREF7 , BIBREF8 of neural networks is a simple yet very effective technique to improve the accuracy of NMT. The decoder makes use of INLINEFORM0 NMT networks which are either trained independently BIBREF9 , BIBREF2 , BIBREF3 , BIBREF4 or share some amount of training iterations BIBREF10 , BIBREF1 , BIBREF5 , BIBREF6 . The ensemble decoder computes predictions from each of the individual models which are then combined using the arithmetic average BIBREF9 or the geometric average BIBREF5 .",
"Ensembling consistently outperforms single NMT by a large margin. However, the decoding speed is significantly worse since the decoder needs to apply INLINEFORM0 NMT models rather than only one. Therefore, a recent line of research transfers the idea of knowledge distillation BIBREF11 , BIBREF12 to NMT and trains a smaller network (the student) by minimizing the cross-entropy to the output of the ensemble system (the teacher) BIBREF13 , BIBREF14 . This paper presents an alternative to knowledge distillation as we aim to speed up decoding to be comparable to single NMT while retaining the boost in translation accuracy from the ensemble. In a first step, we describe how to construct a single large neural network which imitates the output of an ensemble of multiple networks with the same topology. We will refer to this process as unfolding. GPU-based decoding with the unfolded network is often much faster than ensemble decoding since more work can be done on the GPU. In a second step, we explore methods to reduce the size of the unfolded network. This idea is justified by the fact that ensembled neural networks are often over-parameterized and have a large degree of redundancy BIBREF15 , BIBREF16 , BIBREF17 . Shrinking the unfolded network leads to a smaller model which consumes less space on the disk and in the memory; a crucial factor on mobile devices. More importantly, the decoding speed on all platforms benefits greatly from the reduced number of neurons. We find that the dimensionality of linear embedding layers in the NMT network can be reduced heavily by low-rank matrix approximation based on singular value decomposition (SVD). This suggest that high dimensional embedding layers may be needed for training, but do not play an important role for decoding. The NMT network, however, also consists of complex layers like gated recurrent units BIBREF18 and attention BIBREF19 . Therefore, we introduce a novel algorithm based on linear combinations of neurons which can be applied either during training (data-bound) or directly on the weight matrices without using training data (data-free). We report that with a mix of the presented shrinking methods we are able to reduce the size of the unfolded network to the size of the single NMT network while keeping the boost in BLEU score from the ensemble. Depending on the aggressiveness of shrinking, we report either a gain of 2.2 BLEU at the same decoding speed, or a 3.4 INLINEFORM1 CPU decoding speed up with only a minor drop in BLEU compared to the original single NMT system. Furthermore, it is often much easier to stage a single NMT system than an ensemble in a commercial MT workflow, and it is crucial to be able to optimize quality at specific speed and memory constraints. Unfolding and shrinking address these problems directly."
],
[
"The first concept of our approach is called unfolding. Unfolding is an alternative to ensembling of multiple neural networks with the same topology. Rather than averaging their predictions, unfolding constructs a single large neural net out of the individual models which has the same number of input and output neurons but larger inner layers. Our main motivation for unfolding is to obtain a single network with ensemble level performance which can be shrunk with the techniques in Sec. SECREF3 .",
"Suppose we ensemble two single layer feedforward neural nets as shown in Fig. FIGREF1 . Normally, ensembling is implemented by performing an isolated forward pass through the first network (Fig. SECREF2 ), another isolated forward pass through the second network (Fig. SECREF3 ), and averaging the activities in the output layers of both networks. This can be simulated by merging both networks into a single large network as shown in Fig. SECREF4 . The first neurons in the hidden layer of the combined network correspond to the hidden layer in the first single network, and the others to the hidden layer of the second network. A single pass through the combined network yields the same output as the ensemble if the output layer is linear (up to a factor 2). The weight matrices in the unfolded network can be constructed by stacking the corresponding weight matrices (either horizontally or vertically) in network 1 and 2. This kind of aggregation of multiple networks with the same topology is not only possible for single-layer feedforward architectures but also for complex networks consisting of multiple GRU layers and attention.",
"For a formal description of unfolding we address layers with indices INLINEFORM0 . The special layer 0 has a single neuron for modelling bias vectors. Layer 1 holds the input neurons and layer INLINEFORM1 is the output layer. We denote the size of a layer in the individual models as INLINEFORM2 . When combining INLINEFORM3 networks, the layer size INLINEFORM4 in the unfolded network is increased by factor INLINEFORM5 if INLINEFORM6 is an inner layer, and equal to INLINEFORM7 if INLINEFORM8 is the input or output layer. We denote the weight matrix between two layers INLINEFORM9 in the INLINEFORM10 -th individual model ( INLINEFORM11 ) as INLINEFORM12 , and the corresponding weight matrix in the unfolded network as INLINEFORM13 . We explicitly allow INLINEFORM14 and INLINEFORM15 to be non-consecutive or reversed to be able to model recurrent networks. We use the zero-matrix if layers INLINEFORM16 and INLINEFORM17 are not connected. The construction of the unfolded weight matrix INLINEFORM18 from the individual matrices INLINEFORM19 depends on whether the connected layers are inner layers or not. The complete formula is listed in Fig. FIGREF5 .",
"Unfolded NMT networks approximate but do not exactly match the output of the ensemble due to two reasons. First, the unfolded network synchronizes the attentions of the individual models. Each decoding step in the unfolded network computes a single attention weight vector. In contrast, ensemble decoding would compute one attention weight vector for each of the INLINEFORM0 input models. A second difference is that the ensemble decoder first applies the softmax at the output layer, and then averages the prediction probabilities. The unfolded network averages the neuron activities (i.e. the logits) first, and then applies the softmax function. Interestingly, as shown in Sec. SECREF4 , these differences do not have any impact on the BLEU score but yield potential speed advantages of unfolding since the computationally expensive softmax layer is only applied once."
],
[
"After constructing the weight matrices of the unfolded network we reduce the size of it by iteratively shrinking layer sizes. In this section we denote the incoming weight matrix of the layer to shrink as INLINEFORM0 and the outgoing weight matrix as INLINEFORM1 . Our procedure is inspired by the method of Srinivas and Babu sparsify-datafree. They propose a criterion for removing neurons in inner layers of the network based on two intuitions. First, similarly to Hebb's learning rule, they detect redundancy by the principle neurons which fire together, wire together. If the incoming weight vectors INLINEFORM2 and INLINEFORM3 are exactly the same for two neurons INLINEFORM4 and INLINEFORM5 , we can remove the neuron INLINEFORM6 and add its outgoing connections to neuron INLINEFORM7 ( INLINEFORM8 ) without changing the output. This holds since the activity in neuron INLINEFORM14 will always be equal to the activity in neuron INLINEFORM15 . In practice, Srinivas and Babu use a distance measure based on the difference of the incoming weight vectors to search for similar neurons as exact matches are very rare.",
"The second intuition of the criterion used by Srinivas and Babu sparsify-datafree is that neurons with small outgoing weights contribute very little overall. Therefore, they search for a pair of neurons INLINEFORM0 according the following term and remove the INLINEFORM1 -th neuron. DISPLAYFORM0 ",
"Neuron INLINEFORM0 is selected for removal if (1) there is another neuron INLINEFORM1 which has a very similar set of incoming weights and if (2) INLINEFORM2 has a small outgoing weight vector. Their criterion is data-free since it does not require any training data. For further details we refer to Srinivas and Babu sparsify-datafree."
],
[
"Srinivas and Babu sparsify-datafree propose to add the outgoing weights of INLINEFORM0 to the weights of a similar neuron INLINEFORM1 to compensate for the removal of INLINEFORM2 . However, we have found that this approach does not work well on NMT networks. We propose instead to compensate for the removal of a neuron by a linear combination of the remaining neurons in the layer. Data-free shrinking assumes for the sake of deriving the update rule that the neuron activation function is linear. We now ask the following question: How can we compensate as well as possible for the loss of neuron INLINEFORM3 such that the impact on the output of the whole network is minimized? Data-free shrinking represents the incoming weight vector of neuron INLINEFORM4 ( INLINEFORM5 ) as linear combination of the incoming weight vectors of the other neurons. The linear factors can be found by satisfying the following linear system: DISPLAYFORM0 ",
"where INLINEFORM0 is matrix INLINEFORM1 without the INLINEFORM2 -th column. In practice, we use the method of ordinary least squares to find INLINEFORM3 because the system may be overdetermined. The idea is that if we mix the outputs of all neurons in the layer by the INLINEFORM4 -weights, we get the output of the INLINEFORM5 -th neuron. The row vector INLINEFORM6 contains the contributions of the INLINEFORM7 -th neuron to each of the neurons in the next layer. Rather than using these connections, we approximate their effect by adding some weight to the outgoing connections of the other neurons. How much weight depends on INLINEFORM8 and the outgoing weights INLINEFORM9 . The factor INLINEFORM10 which we need to add to the outgoing connection of the INLINEFORM11 -th neuron to compensate for the loss of the INLINEFORM12 -th neuron on the INLINEFORM13 -th neuron in the next layer is: DISPLAYFORM0 ",
"Therefore, the update rule for INLINEFORM0 is: DISPLAYFORM0 ",
"In the remainder we will refer to this method as data-free shrinking. Note that we recover the update rule of Srinivas and Babu sparsify-datafree by setting INLINEFORM0 to the INLINEFORM1 -th unit vector. Also note that the error introduced by our shrinking method is due to the fact that we ignore the non-linearity, and that the solution for INLINEFORM2 may not be exact. The method is error-free on linear layers as long as the residuals of the least-squares analysis in Eq. EQREF10 are zero.",
"The terminology of neurons needs some further elaboration for GRU layers which rather consist of update and reset gates and states BIBREF18 . On GRU layers, we treat the states as neurons, i.e. the INLINEFORM0 -th neuron refers to the INLINEFORM1 -th entry in the GRU state vector. Input connections to the gates are included in the incoming weight matrix INLINEFORM2 for estimating INLINEFORM3 in Eq. EQREF10 . Removing neuron INLINEFORM4 in a GRU layer means deleting the INLINEFORM5 -th entry in the states and both gate vectors."
],
[
"Although we find our data-free approach to be a substantial improvement over the methods of Srinivas and Babu sparsify-datafree on NMT networks, it still leads to a non-negligible decline in BLEU score when applied to recurrent GRU layers. Our data-free method uses the incoming weights to identify similar neurons, i.e. neurons expected to have similar activities. This works well enough for simple layers, but the interdependencies between the states and the gates inside gated layers like GRUs or LSTMs are complex enough that redundancies cannot be found simply by looking for similar weights. In the spirit of Babaeizadeh et al. sparsify-noiseout, our data-bound version records neuron activities during training to estimate INLINEFORM0 . We compensate for the removal of the INLINEFORM1 -th neuron by using a linear combination of the output of remaining neurons with similar activity patterns. In each layer, we prune 40 neurons each 450 training iterations until the target layer size is reached. Let INLINEFORM2 be the matrix which holds the records of neuron activities in the layer since the last removal. For example, for the decoder GRU layer, a batch size of 80, and target sentence lengths of 20, INLINEFORM3 has INLINEFORM4 rows and INLINEFORM5 (the number of neurons in the layer) columns. Similarly to Eq. EQREF10 we find interpolation weights INLINEFORM6 using the method of least squares on the following linear system. DISPLAYFORM0 ",
"The update rule for the outgoing weight matrix is the same as for our data-free method (Eq. EQREF12 ). The key difference between data-free and data-bound shrinking is the way INLINEFORM0 is estimated. Data-free shrinking uses the similarities between incoming weights, and data-bound shrinking uses neuron activities recorded during training. Once we select a neuron to remove, we estimate INLINEFORM1 , compensate for the removal, and proceed with the shrunk network. Both methods are prior to any decoding and result in shrunk parameter files which are then loaded to the decoder. Both methods remove neurons rather than single weights.",
"The data-bound algorithm runs gradient-based optimization on the unfolded network. We use the AdaGrad BIBREF20 step rule, a small learning rate of 0.0001, and aggressive step clipping at 0.05 to avoid destroying useful weights which were learned in the individual networks prior to the construction of the unfolded network.",
"Our data-bound algorithm uses a data-bound version of the neuron selection criterion in Eq. EQREF8 which operates on the activity matrix INLINEFORM0 . We search for the pair INLINEFORM1 according the following term and remove neuron INLINEFORM2 . DISPLAYFORM0 "
],
[
"The standard attention-based NMT network architecture BIBREF19 includes three linear layers: the embedding layer in the encoder, and the output and feedback embedding layers in the decoder. We have found that linear layers are particularly easy to shrink using low-rank matrix approximation. As before we denote the incoming weight matrix as INLINEFORM0 and the outgoing weight matrix as INLINEFORM1 . Since the layer is linear, we could directly connect the previous layer with the next layer using the product of both weight matrices INLINEFORM2 . However, INLINEFORM3 may be very large. Therefore, we approximate INLINEFORM4 as a product of two low rank matrices INLINEFORM5 and INLINEFORM6 ( INLINEFORM7 ) where INLINEFORM8 is the desired layer size. A very common way to find such a matrix factorization is using truncated singular value decomposition (SVD). The layer is eventually shrunk by replacing INLINEFORM9 with INLINEFORM10 and INLINEFORM11 with INLINEFORM12 ."
],
[
"The individual NMT systems we use as source for constructing the unfolded networks are trained using AdaDelta BIBREF21 on the Blocks/Theano implementation BIBREF22 , BIBREF23 of the standard attention-based NMT model BIBREF19 with: 1000 dimensional GRU layers BIBREF18 in both the decoder and bidrectional encoder; a single maxout output layer BIBREF24 ; and 620 dimensional embedding layers. We follow Sennrich et al. nmt-bpe and use subword units based on byte pair encoding rather than words as modelling units. Our SGNMT decoder BIBREF25 with a beam size of 12 is used in all experiments. Our primary corpus is the Japanese-English (Ja-En) ASPEC data set BIBREF26 . We select a subset of 500K sentence pairs to train our models as suggested by Neubig et al. sys-neubig-wat15. We report cased BLEU scores calculated with Moses' multi-bleu.pl to be strictly comparable to the evaluation done in the Workshop of Asian Translation (WAT). We also apply our method to the WMT data set for English-German (En-De), using the news-test2014 as a development set, and keeping news-test2015 and news-test2016 as test sets. En-De BLEU scores are computed using mteval-v13a.pl as in the WMT evaluation. We set the vocabulary sizes to 30K for Ja-En and 50K for En-De. We also report the size factor for each model which is the total number of model parameters (sum of all weight matrix sizes) divided by the number of parameters in the original NMT network (86M for Ja-En and 120M for En-De). We choose a widely used, simple ensembling method (prediction averaging) as our baseline. We feel that the prevalence of this method makes it a reasonable baseline for our experiments."
],
[
"The idea of pruning neural networks to improve the compactness of the models dates back more than 25 years BIBREF15 . The literature is therefore vast BIBREF28 . One line of research aims to remove unimportant network connections. The connections can be selected for deletion based on the second-derivative of the training error with respect to the weight BIBREF15 , BIBREF16 , or by a threshold criterion on its magnitude BIBREF29 . See et al. sparsify-nmt confirmed a high degree of weight redundancy in NMT networks.",
"In this work we are interested in removing neurons rather than single connections since we strive to shrink the unfolded network such that it resembles the layout of an individual model. We argued in Sec. SECREF4 that removing neurons rather than connections does not only improve the model size but also the memory footprint and decoding speed. As explained in Sec. SECREF9 , our data-free method is an extension of the approach by Srinivas and Babu sparsify-datafree; our extension performs significantly better on NMT networks. Our data-bound method (Sec. SECREF14 ) is inspired by Babaeizadeh et al. sparsify-noiseout as we combine neurons with similar activities during training, but we use linear combinations of multiple neurons to compensate for the loss of a neuron rather than merging pairs of neurons.",
"Using low rank matrices for neural network compression, particularly approximations via SVD, has been studied widely in the literature BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 . These approaches often use low rank matrices to approximate a full rank weight matrix in the original network. In contrast, we shrink an entire linear layer by applying SVD on the product of the incoming and outgoing weight matrices (Sec. SECREF18 ).",
"In this paper we mimicked the output of the high performing but cumbersome ensemble by constructing a large unfolded network, and shrank this network afterwards. Another approach, known as knowledge distillation, uses the large model (the teacher) to generate soft training labels for the smaller student network BIBREF11 , BIBREF12 . The student network is trained by minimizing the cross-entropy to the teacher. This idea has been applied to sequence modelling tasks such as machine translation and speech recognition BIBREF35 , BIBREF13 , BIBREF14 . Our approach can be computationally more efficient as the training set does not have to be decoded by the large teacher network.",
"Junczys-Dowmunt et al. averaging2,averaging1 reported gains from averaging the weight matrices of multiple checkpoints of the same training run. However, our attempts to replicate their approach were not successful. Averaging might work well when the behaviour of corresponding units is similar across networks, but that cannot be guaranteed when networks are trained independently."
],
[
"We have described a generic method for improving the decoding speed and BLEU score of single system NMT. Our approach involves unfolding an ensemble of multiple systems into a single large neural network and shrinking this network by removing redundant neurons. Our best results on Japanese-English either yield a gain of 2.2 BLEU compared to the original single NMT network at about the same decoding speed, or a INLINEFORM0 CPU decoding speed up with only a minor drop in BLEU.",
"The current formulation of unfolding works for networks of the same topology as the concatenation of layers is only possible for analogous layers in different networks. Unfolding and shrinking diverse networks could be possible, for example by applying the technique only to the input and output layers or by some other scheme of finding associations between units in different models, but we leave this investigation to future work as models in NMT ensembles in current research usually have the same topology BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF6 ."
],
[
"This work was supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC grant EP/L027623/1)."
],
[
"Data-free and data-bound shrinking can be interpreted as setting the expected difference between network outputs before and after a removal operation to zero under different assumptions.",
"For simplicity, we focus our probabilistic treatment of shrinking on single layer feedforward networks. Such a network maps an input INLINEFORM0 to an output INLINEFORM1 . The INLINEFORM2 -th output INLINEFORM3 is computed according the following equation DISPLAYFORM0 ",
"where INLINEFORM0 is the incoming weight vector of the INLINEFORM1 -th hidden neuron (denoted as INLINEFORM2 in the main paper) and INLINEFORM3 the outgoing weight matrix of the INLINEFORM4 -dimensional hidden layer. We now remove the INLINEFORM5 -th neuron in the hidden layer and modify the outgoing weights to compensate for the removal: DISPLAYFORM0 ",
"where INLINEFORM0 is the output after the removal operation and INLINEFORM1 are the modified outgoing weights. Our goal is to choose INLINEFORM2 such that the expected error introduced by removing neuron INLINEFORM3 is zero: DISPLAYFORM0 "
]
],
"section_name": [
"Introduction",
"Unfolding KK Networks into a Single Large Neural Network",
"Shrinking the Unfolded Network",
"Data-Free Neuron Removal",
"Data-Bound Neuron Removal",
"Shrinking Embedding Layers with SVD",
"Results",
"Related Work",
"Conclusion",
"Acknowledgments",
"Appendix: Probabilistic Interpretation of Data-Free and Data-Bound Shrinking"
]
} | {
"answers": [
{
"annotation_id": [
"053d2c5d2f5fc00042fbc5e14d35afb99bc3f747",
"eecac044395031bfed9d8f98970518939a479f01"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 5: Our best models on Ja-En.",
"FLOAT SELECTED: Table 6: Our best models on En-De."
],
"extractive_spans": [],
"free_form_answer": "For the test set a BLEU score of 25.7 on Ja-En and 20.7 (2014 test set), 23.1 (2015 test set), and 26.1 (2016 test set) on En-De",
"highlighted_evidence": [
"FLOAT SELECTED: Table 5: Our best models on Ja-En.",
"FLOAT SELECTED: Table 6: Our best models on En-De."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We have described a generic method for improving the decoding speed and BLEU score of single system NMT. Our approach involves unfolding an ensemble of multiple systems into a single large neural network and shrinking this network by removing redundant neurons. Our best results on Japanese-English either yield a gain of 2.2 BLEU compared to the original single NMT network at about the same decoding speed, or a INLINEFORM0 CPU decoding speed up with only a minor drop in BLEU."
],
"extractive_spans": [
"gain of 2.2 BLEU compared to the original single NMT network"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our best results on Japanese-English either yield a gain of 2.2 BLEU compared to the original single NMT network at about the same decoding speed, or a INLINEFORM0 CPU decoding speed up with only a minor drop in BLEU."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4fcb58a0ded1132082e2c3057d8fce6509c8e8e4",
"8ad176e0b26d8a6d544d2767450ccbf88416fc28"
],
"answer": [
{
"evidence": [
"The individual NMT systems we use as source for constructing the unfolded networks are trained using AdaDelta BIBREF21 on the Blocks/Theano implementation BIBREF22 , BIBREF23 of the standard attention-based NMT model BIBREF19 with: 1000 dimensional GRU layers BIBREF18 in both the decoder and bidrectional encoder; a single maxout output layer BIBREF24 ; and 620 dimensional embedding layers. We follow Sennrich et al. nmt-bpe and use subword units based on byte pair encoding rather than words as modelling units. Our SGNMT decoder BIBREF25 with a beam size of 12 is used in all experiments. Our primary corpus is the Japanese-English (Ja-En) ASPEC data set BIBREF26 . We select a subset of 500K sentence pairs to train our models as suggested by Neubig et al. sys-neubig-wat15. We report cased BLEU scores calculated with Moses' multi-bleu.pl to be strictly comparable to the evaluation done in the Workshop of Asian Translation (WAT). We also apply our method to the WMT data set for English-German (En-De), using the news-test2014 as a development set, and keeping news-test2015 and news-test2016 as test sets. En-De BLEU scores are computed using mteval-v13a.pl as in the WMT evaluation. We set the vocabulary sizes to 30K for Ja-En and 50K for En-De. We also report the size factor for each model which is the total number of model parameters (sum of all weight matrix sizes) divided by the number of parameters in the original NMT network (86M for Ja-En and 120M for En-De). We choose a widely used, simple ensembling method (prediction averaging) as our baseline. We feel that the prevalence of this method makes it a reasonable baseline for our experiments."
],
"extractive_spans": [
"simple ensembling method (prediction averaging)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We choose a widely used, simple ensembling method (prediction averaging) as our baseline."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The individual NMT systems we use as source for constructing the unfolded networks are trained using AdaDelta BIBREF21 on the Blocks/Theano implementation BIBREF22 , BIBREF23 of the standard attention-based NMT model BIBREF19 with: 1000 dimensional GRU layers BIBREF18 in both the decoder and bidrectional encoder; a single maxout output layer BIBREF24 ; and 620 dimensional embedding layers. We follow Sennrich et al. nmt-bpe and use subword units based on byte pair encoding rather than words as modelling units. Our SGNMT decoder BIBREF25 with a beam size of 12 is used in all experiments. Our primary corpus is the Japanese-English (Ja-En) ASPEC data set BIBREF26 . We select a subset of 500K sentence pairs to train our models as suggested by Neubig et al. sys-neubig-wat15. We report cased BLEU scores calculated with Moses' multi-bleu.pl to be strictly comparable to the evaluation done in the Workshop of Asian Translation (WAT). We also apply our method to the WMT data set for English-German (En-De), using the news-test2014 as a development set, and keeping news-test2015 and news-test2016 as test sets. En-De BLEU scores are computed using mteval-v13a.pl as in the WMT evaluation. We set the vocabulary sizes to 30K for Ja-En and 50K for En-De. We also report the size factor for each model which is the total number of model parameters (sum of all weight matrix sizes) divided by the number of parameters in the original NMT network (86M for Ja-En and 120M for En-De). We choose a widely used, simple ensembling method (prediction averaging) as our baseline. We feel that the prevalence of this method makes it a reasonable baseline for our experiments."
],
"extractive_spans": [
"a widely used, simple ensembling method (prediction averaging) "
],
"free_form_answer": "",
"highlighted_evidence": [
"We choose a widely used, simple ensembling method (prediction averaging) as our baseline. We feel that the prevalence of this method makes it a reasonable baseline for our experiments."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"1e53fc8418cedda42db9ac984a263b74704befe2",
"c50a083d2df876578b67ddb255841f4a098356cb"
],
"answer": [
{
"evidence": [
"The individual NMT systems we use as source for constructing the unfolded networks are trained using AdaDelta BIBREF21 on the Blocks/Theano implementation BIBREF22 , BIBREF23 of the standard attention-based NMT model BIBREF19 with: 1000 dimensional GRU layers BIBREF18 in both the decoder and bidrectional encoder; a single maxout output layer BIBREF24 ; and 620 dimensional embedding layers. We follow Sennrich et al. nmt-bpe and use subword units based on byte pair encoding rather than words as modelling units. Our SGNMT decoder BIBREF25 with a beam size of 12 is used in all experiments. Our primary corpus is the Japanese-English (Ja-En) ASPEC data set BIBREF26 . We select a subset of 500K sentence pairs to train our models as suggested by Neubig et al. sys-neubig-wat15. We report cased BLEU scores calculated with Moses' multi-bleu.pl to be strictly comparable to the evaluation done in the Workshop of Asian Translation (WAT). We also apply our method to the WMT data set for English-German (En-De), using the news-test2014 as a development set, and keeping news-test2015 and news-test2016 as test sets. En-De BLEU scores are computed using mteval-v13a.pl as in the WMT evaluation. We set the vocabulary sizes to 30K for Ja-En and 50K for En-De. We also report the size factor for each model which is the total number of model parameters (sum of all weight matrix sizes) divided by the number of parameters in the original NMT network (86M for Ja-En and 120M for En-De). We choose a widely used, simple ensembling method (prediction averaging) as our baseline. We feel that the prevalence of this method makes it a reasonable baseline for our experiments."
],
"extractive_spans": [
" Japanese-English (Ja-En) ASPEC data set BIBREF26",
"WMT data set for English-German (En-De), using the news-test2014 as a development set, and keeping news-test2015 and news-test2016 as test sets. "
],
"free_form_answer": "",
"highlighted_evidence": [
"Our primary corpus is the Japanese-English (Ja-En) ASPEC data set BIBREF26 . We select a subset of 500K sentence pairs to train our models as suggested by Neubig et al. sys-neubig-wat15. We report cased BLEU scores calculated with Moses' multi-bleu.pl to be strictly comparable to the evaluation done in the Workshop of Asian Translation (WAT). We also apply our method to the WMT data set for English-German (En-De), using the news-test2014 as a development set, and keeping news-test2015 and news-test2016 as test sets. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The individual NMT systems we use as source for constructing the unfolded networks are trained using AdaDelta BIBREF21 on the Blocks/Theano implementation BIBREF22 , BIBREF23 of the standard attention-based NMT model BIBREF19 with: 1000 dimensional GRU layers BIBREF18 in both the decoder and bidrectional encoder; a single maxout output layer BIBREF24 ; and 620 dimensional embedding layers. We follow Sennrich et al. nmt-bpe and use subword units based on byte pair encoding rather than words as modelling units. Our SGNMT decoder BIBREF25 with a beam size of 12 is used in all experiments. Our primary corpus is the Japanese-English (Ja-En) ASPEC data set BIBREF26 . We select a subset of 500K sentence pairs to train our models as suggested by Neubig et al. sys-neubig-wat15. We report cased BLEU scores calculated with Moses' multi-bleu.pl to be strictly comparable to the evaluation done in the Workshop of Asian Translation (WAT). We also apply our method to the WMT data set for English-German (En-De), using the news-test2014 as a development set, and keeping news-test2015 and news-test2016 as test sets. En-De BLEU scores are computed using mteval-v13a.pl as in the WMT evaluation. We set the vocabulary sizes to 30K for Ja-En and 50K for En-De. We also report the size factor for each model which is the total number of model parameters (sum of all weight matrix sizes) divided by the number of parameters in the original NMT network (86M for Ja-En and 120M for En-De). We choose a widely used, simple ensembling method (prediction averaging) as our baseline. We feel that the prevalence of this method makes it a reasonable baseline for our experiments."
],
"extractive_spans": [
"Japanese-English (Ja-En) ASPEC data set BIBREF26",
"WMT data set for English-German (En-De)",
"news-test2014",
"news-test2015 and news-test2016"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our primary corpus is the Japanese-English (Ja-En) ASPEC data set BIBREF26 . We select a subset of 500K sentence pairs to train our models as suggested by Neubig et al. sys-neubig-wat15.",
"We also apply our method to the WMT data set for English-German (En-De), using the news-test2014 as a development set, and keeping news-test2015 and news-test2016 as test sets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"669f5468a4743cde6e93c194280e9a39bdd5ef5c",
"a707a2f793b9cb75038fd76c35c092335f14ae53"
],
"answer": [
{
"evidence": [
"The individual NMT systems we use as source for constructing the unfolded networks are trained using AdaDelta BIBREF21 on the Blocks/Theano implementation BIBREF22 , BIBREF23 of the standard attention-based NMT model BIBREF19 with: 1000 dimensional GRU layers BIBREF18 in both the decoder and bidrectional encoder; a single maxout output layer BIBREF24 ; and 620 dimensional embedding layers. We follow Sennrich et al. nmt-bpe and use subword units based on byte pair encoding rather than words as modelling units. Our SGNMT decoder BIBREF25 with a beam size of 12 is used in all experiments. Our primary corpus is the Japanese-English (Ja-En) ASPEC data set BIBREF26 . We select a subset of 500K sentence pairs to train our models as suggested by Neubig et al. sys-neubig-wat15. We report cased BLEU scores calculated with Moses' multi-bleu.pl to be strictly comparable to the evaluation done in the Workshop of Asian Translation (WAT). We also apply our method to the WMT data set for English-German (En-De), using the news-test2014 as a development set, and keeping news-test2015 and news-test2016 as test sets. En-De BLEU scores are computed using mteval-v13a.pl as in the WMT evaluation. We set the vocabulary sizes to 30K for Ja-En and 50K for En-De. We also report the size factor for each model which is the total number of model parameters (sum of all weight matrix sizes) divided by the number of parameters in the original NMT network (86M for Ja-En and 120M for En-De). We choose a widely used, simple ensembling method (prediction averaging) as our baseline. We feel that the prevalence of this method makes it a reasonable baseline for our experiments."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We also apply our method to the WMT data set for English-German (En-De), using the news-test2014 as a development set, and keeping news-test2015 and news-test2016 as test sets. En-De BLEU scores are computed using mteval-v13a.pl as in the WMT evaluation."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The individual NMT systems we use as source for constructing the unfolded networks are trained using AdaDelta BIBREF21 on the Blocks/Theano implementation BIBREF22 , BIBREF23 of the standard attention-based NMT model BIBREF19 with: 1000 dimensional GRU layers BIBREF18 in both the decoder and bidrectional encoder; a single maxout output layer BIBREF24 ; and 620 dimensional embedding layers. We follow Sennrich et al. nmt-bpe and use subword units based on byte pair encoding rather than words as modelling units. Our SGNMT decoder BIBREF25 with a beam size of 12 is used in all experiments. Our primary corpus is the Japanese-English (Ja-En) ASPEC data set BIBREF26 . We select a subset of 500K sentence pairs to train our models as suggested by Neubig et al. sys-neubig-wat15. We report cased BLEU scores calculated with Moses' multi-bleu.pl to be strictly comparable to the evaluation done in the Workshop of Asian Translation (WAT). We also apply our method to the WMT data set for English-German (En-De), using the news-test2014 as a development set, and keeping news-test2015 and news-test2016 as test sets. En-De BLEU scores are computed using mteval-v13a.pl as in the WMT evaluation. We set the vocabulary sizes to 30K for Ja-En and 50K for En-De. We also report the size factor for each model which is the total number of model parameters (sum of all weight matrix sizes) divided by the number of parameters in the original NMT network (86M for Ja-En and 120M for En-De). We choose a widely used, simple ensembling method (prediction averaging) as our baseline. We feel that the prevalence of this method makes it a reasonable baseline for our experiments."
],
"extractive_spans": [
"English-German (En-De)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We also apply our method to the WMT data set for English-German (En-De), using the news-test2014 as a development set, and keeping news-test2015 and news-test2016 as test sets. En-De BLEU scores are computed using mteval-v13a.pl as in the WMT evaluation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What were the performance results of their network?",
"What were the baselines?",
"What dataset is used?",
"Do they explore other language pairs?"
],
"question_id": [
"73d87f6ead32653a518fbe8cdebd81b4a3ffcac0",
"fda47c68fd5f7b44bd539f83ded5882b96c36dd7",
"643645e02ffe8fde45918615ec92013a035d1b92",
"a994cc18046912a8c9328dc572f4e4310736c0e2"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Unfolding mimics the output of the ensemble of two single layer feedforward networks.",
"Figure 2: General formula for unfolding weight matrices. The set InnerLayers := [2, D− 1] includes all layers except the input, output, and bias layer.",
"Table 1: Shrinking layers of the unfolded network on Ja-En to their original size.",
"Table 2: Compensating for neuron removal in the data-bound algorithm. Row (d) corresponds to row (f) in Tab. 1.",
"Table 3: Time measurements on Ja-En. Layers are shrunk to their size in the original NMT model.",
"Table 5: Our best models on Ja-En.",
"Figure 3: Impact of shrinking on the BLEU score.",
"Table 4: Layer sizes of our setups for Ja-En.",
"Table 6: Our best models on En-De."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"7-Table5-1.png",
"7-Figure3-1.png",
"7-Table4-1.png",
"8-Table6-1.png"
]
} | [
"What were the performance results of their network?"
] | [
[
"1704.03279-7-Table5-1.png",
"1704.03279-8-Table6-1.png",
"1704.03279-Conclusion-0"
]
] | [
"For the test set a BLEU score of 25.7 on Ja-En and 20.7 (2014 test set), 23.1 (2015 test set), and 26.1 (2016 test set) on En-De"
] | 242 |
1901.05389 | Location, Occupation, and Semantics based Socioeconomic Status Inference on Twitter | The socioeconomic status of people depends on a combination of individual characteristics and environmental variables, thus its inference from online behavioral data is a difficult task. Attributes like user semantics in communication, habitat, occupation, or social network are all known to be determinant predictors of this feature. In this paper we propose three different data collection and combination methods to first estimate and, in turn, infer the socioeconomic status of French Twitter users from their online semantics. Our methods are based on open census data, crawled professional profiles, and remotely sensed, expert annotated information on living environment. Our inference models reach similar performance of earlier results with the advantage of relying on broadly available datasets and of providing a generalizable framework to estimate socioeconomic status of large numbers of Twitter users. These results may contribute to the scientific discussion on social stratification and inequalities, and may fuel several applications. | {
"paragraphs": [
[
"Online social networks have become one of the most disruptive communication platforms, as everyday billions of individuals use them to interact with each other. Their penetration in our everyday lives seems ever-growing and has in turn generated a massive volume of publicly available data open to analysis. The digital footprints left across these multiple media platforms provide us with a unique source to study and understand how the linguistic phenotype of a given user is related to social attributes such as socioeconomic status (SES).",
"The quantification and inference of SES of individuals is a long lasting question in the social sciences. It is a rather difficult problem as it may depend on a combination of individual characteristics and environmental variables BIBREF0 . Some of these features can be easier to assess like income, gender, or age whereas others, relying to some degree on self-definition and sometimes entangled with privacy issues, are harder to assign like ethnicity, occupation, education level or home location. Furthermore, individual SES correlates with other individual or network attributes, as users tend to build social links with others of similar SES, a phenomenon known as status homophily BIBREF1 , arguably driving the observed stratification of society BIBREF2 . At the same time, shared social environment, similar education level, and social influence have been shown to jointly lead socioeconomic groups to exhibit stereotypical behavioral patterns, such as shared political opinion BIBREF3 or similar linguistic patterns BIBREF4 . Although these features are entangled and causal relation between them is far from understood, they appear as correlations in the data.",
"Datasets recording multiple characteristics of human behaviour are more and more available due to recent developments in data collection technologies and increasingly popular online platforms and personal digital devices. The automatic tracking of online activities, commonly associated with profile data and meta-information; the precise recording of daily activities, interaction dynamics and mobility patterns collected through mobile personal devices; together with the detailed and expert annotated census data all provide new grounds for the inference of individual features or behavioral patterns BIBREF5 . The exploitation of these data sources has already been proven to be fruitful as cutting edge recommendation systems, advanced methods for health record analysis, or successful prediction tools for social behaviour heavily rely on them BIBREF6 . Nevertheless, despite the available data, some inference tasks, like individual SES prediction, remain an open challenge.",
"The precise inference of SES would contribute to overcome several scientific challenges and could potentially have several commercial applications BIBREF7 . Further, robust SES inference would provide unique opportunities to gain deeper insights on socioeconomic inequalities BIBREF8 , social stratification BIBREF2 , and on the driving mechanisms of network evolution, such as status homophily or social segregation.",
"In this work, we take a horizontal approach to this problem and explore various ways to infer the SES of a large sample of social media users. We propose different data collection and combination strategies using open, crawlable, or expert annotated socioeconomic data for the prediction task. Specifically, we use an extensive Twitter dataset of 1.3M users located in France, all associated with their tweets and profile information; 32,053 of them having inferred home locations. Individual SES is estimated by relying on three separate datasets, namely socioeconomic census data; crawled profession information and expert annotated Google Street View images of users' home locations. Each of these datasets is then used as ground-truth to infer the SES of Twitter users from profile and semantic features similar to BIBREF9 . We aim to explore and assess how the SES of social media users can be obtained and how much the inference problem depends on annotation and the user's individual and linguistic attributes.",
"We provide in Section SECREF2 an overview of the related literature to contextualize the novelty of our work. In Section SECREF3 we provide a detailed description of the data collection and combination methods. In Section SECREF4 we introduce the features extracted to solve the SES inference problem, with results summarized in Section SECREF5 . Finally, in Section SECREF6 and SECREF7 we conclude our paper with a brief discussion of the limitations and perspectives of our methods."
],
[
"There is a growing effort in the field to combine online behavioral data with census records, and expert annotated information to infer social attributes of users of online services. The predicted attributes range from easily assessable individual characteristics such as age BIBREF10 , or occupation BIBREF9 , BIBREF11 , BIBREF12 , BIBREF13 to more complex psychological and sociological traits like political affiliation BIBREF14 , personality BIBREF15 , or SES BIBREF16 , BIBREF9 .",
"Predictive features proposed to infer the desired attributes are also numerous. In case of Twitter, user information can be publicly queried within the limits of the public API BIBREF17 . User characteristics collected in this way, such as profile features, tweeting behavior, social network and linguistic content have been used for prediction, while other inference methods relying on external data sources such as website traffic data BIBREF18 or census data BIBREF19 , BIBREF20 have also proven effective. Nonetheless, only recent works involve user semantics in a broader context related to social networks, spatiotemporal information, and personal attributes BIBREF12 , BIBREF9 , BIBREF11 , BIBREF21 .",
"The tradition of relating SES of individuals to their language dates back to the early stages of sociolinguistics where it was first shown that social status reflected through a person's occupation is a determinant factor in the way language is used BIBREF22 . This line of research was recently revisited by Lampos et al. to study the SES inference problem on Twitter. In a series of works BIBREF12 , BIBREF9 , BIBREF11 , BIBREF21 , the authors applied Gaussian Processes to predict user income, occupation and socioeconomic class based on demographic, psycho-linguistic features and a standardized job classification taxonomy which mapped Twitter users to their professional occupations. The high predictive performance has proven this concept with INLINEFORM0 for income prediction, and a precision of INLINEFORM1 for 9-ways SOC classification, and INLINEFORM2 for binary SES classification. Nevertheless, the models developed by the authors are learned by relying on datasets, which were manually labeled through an annotation process crowdsourced through Amazon Mechanical Turk at a high monetary cost. Although the labeled data has been released and provides the base for new extensions BIBREF10 , it has two potential shortfalls that need to be acknowledged. First, the method requires access to a detailed job taxonomy, in this case specific to England, which hinders potential extensions of this line of work to other languages and countries. Furthermore, the language to income pipeline seems to show some dependency on the sample of users that actively chose to disclose their profession in their Twitter profile. Features obtained on this set might not be easily recovered from a wider sample of Twitter users. This limits the generalization of these results without assuming a costly acquisition of a new dataset."
],
[
"Our first motivation in this study was to overcome earlier limitations by exploring alternative data collection and combination methods. We provide here three ways to estimate the SES of Twitter users by using (a) open census data, (b) crawled and manually annotated data on professional skills and occupation, and (c) expert annotated data on home location Street View images. We provide here a collection of procedures that enable interested researchers to introduce predictive performance and scalability considerations when interested in developing language to SES inference pipelines. In the following we present in detail all of our data collection and combination methods."
],
[
"Our central dataset was collected from Twitter, an online news and social networking service. Through Twitter, users can post and interact by “tweeting\" messages with restricted length. Tweets may come with several types of metadata including information about the author's profile, the detected language as well as where and when the tweet was posted. Specifically, we recorded 90,369,215 tweets written in French, posted by 1.3 Million users in the timezones GMT and GMT+1 over one year (between August 2014 to July 2015) BIBREF23 . These tweets were obtained via the Twitter Powertrack API provided by Datasift with an access rate of INLINEFORM0 . Using this dataset we built several other corpora:",
"To find users with a representative home location we followed the method published in BIBREF24 , BIBREF25 . As a bottom line, we concentrated on INLINEFORM0 users who posted at least five geolocated tweets with valid GPS coordinates, with at least three of them within a valid census cell (for definition see later), and over a longer period than seven days. Applying these filters we obtained 1,000,064 locations from geolocated tweets. By focusing on the geolocated users, we kept those with limited mobility, i.e., with median distance between locations not greater than 30 km, with tweets posted at places and times which did not require travel faster than 130 INLINEFORM1 (maximum speed allowed within France), and with no more than three tweets within a two seconds window. We further filtered out tweets with coordinates corresponding to locations referring to places (such as “Paris\" or “France\"). Thus, we removed locations that didn't exactly correspond to GPS-tagged tweets and also users which were most likely bots. Home location was estimated by the most frequent location for a user among all coordinates he visited. This way we obtained INLINEFORM2 users, each associated with a unique home location. Finally, we collected the latest INLINEFORM3 tweets from the timeline of all of geolocated users using the Twitter public API BIBREF17 . Note, that by applying these consecutive filters we obtained a more representative population as the Gini index, indicating overall socioeconomic inequalities, was INLINEFORM4 before filtering become INLINEFORM5 due to the filtering methods, which is closer to the value reported by the World Bank ( INLINEFORM6 ) BIBREF26 .",
"To verify our results, we computed the average weekly distance from each recorded location of a user to his inferred home location defined either as its most frequent location overall or among locations posted outside of work-hours from 9AM to 6PM (see Fig. FIGREF4 a). This circadian pattern displays great similarity to earlier results BIBREF25 with two maxima, roughly corresponding to times at the workplace, and a local minimum at 1PM due to people having lunch at home. We found that this circadian pattern was more consistent with earlier results BIBREF25 when we considered all geolocated tweets (“All\" in Fig. FIGREF4 a) rather than only tweets including “home-related\" expressions (“Night\" in Fig. FIGREF4 a). To further verify the inferred home locations, for a subset of 29,389 users we looked for regular expressions in their tweets that were indicative of being at home BIBREF25 , such as “chez moi\", “bruit\", “dormir\" or “nuit\". In Fig. FIGREF4 c we show the temporal distribution of the rate of the word “dormir\" at the inferred home locations. This distribution appears with a peak around 10PM, which is very different from the overall distribution of geolocated tweets throughout the day considering any location (see Fig. FIGREF4 b).",
"To obtain meaningful linguistic data we pre-processed the incoming tweet streams in several ways. As our central question here deals with language semantics of individuals, re-tweets do not bring any additional information to our study, thus we removed them by default. We also removed any expressions considered to be semantically meaningless like URLs, emoticons, mentions of other users (denoted by the @ symbol) and hashtags (denoted by the # symbol) to simplify later post-processing. In addition, as a last step of textual pre-processing, we downcased and stripped the punctuation from the text of every tweet."
],
[
"Our first method to associate SES to geolocated users builds on an open census income dataset at intra-urban level for France BIBREF27 . Obtained from 2010 French tax returns, it was released in December 2016 by the National Institute of Statistics and Economic Studies (INSEE) of France. This dataset collects detailed socioeconomic information of individuals at the census block level (called IRIS), which are defined as territorial cells with varying size but corresponding to blocks of around INLINEFORM0 inhabitants, as shown in Fig. FIGREF7 for greater Paris. For each cell, the data records the deciles of the income distribution of inhabitants. Note that the IRIS data does not provide full coverage of the French territory, as some cells were not reported to avoid identification of individuals (in accordance with current privacy laws), or to avoid territorial cells of excessive area. Nevertheless, this limitation did not hinder our results significantly as we only considered users who posted at least three times from valid IRIS cells, as explained in Section SECREF3 .",
"To associate a single income value to each user, we identified the cell of their estimated home locations and assigned them with the median of the corresponding income distribution. Thus we obtained an average socioeconomic indicator for each user, which was distributed heterogeneously in accordance with Pareto's law BIBREF28 . This is demonstrated in Fig. FIGREF15 a, where the INLINEFORM0 cumulative income distributions as the function of population fraction INLINEFORM1 appears as a Lorentz-curve with area under the diagonal proportional to socioeconomic inequalities. As an example, Fig. FIGREF7 depicts the spatial distribution of INLINEFORM2 users with inferred home locations in IRIS cells located in central Paris and colored as the median income."
],
[
"Earlier studies BIBREF9 , BIBREF11 , BIBREF12 demonstrated that annotated occupation information can be effectively used to derive precise income for individuals and infer therefore their SES. However, these methods required a somewhat selective set of Twitter users as well as an expensive annotation process by hiring premium annotators e.g. from Amazon Mechanical Turk. Our goal here was to obtain the occupations for a general set of Twitter users without the involvement of annotators, but by collecting data from parallel online services.",
"As a second method to estimate SES, we took a sample of Twitter users who mentioned their LinkedIn BIBREF29 profile url in their tweets or Twitter profile. Using these pointers we collected professional profile descriptions from LinkedIn by relying on an automatic crawler mainly used in Search Engine Optimization (SEO) tasks BIBREF30 . We obtained INLINEFORM0 Twitter/LinkedIn users all associated with their job title, professional skills and profile description. Apart from the advantage of working with structured data, professional information extracted from LinkedIn is significantly more reliable than Twitter's due to the high degree of social scrutiny to which each profile is exposed BIBREF31 .",
"To associate income to Twitter users with LinkedIn profiles, we matched them with a given salary based on their reported profession and an occupational salary classification table provided by INSEE BIBREF32 . Due to the ambiguous naming of jobs and to acknowledge permanent/non-permanent, senior/junior contract types we followed three strategies for the matching. In INLINEFORM0 of the cases we directly associated the reported job titles to regular expressions of an occupation. In INLINEFORM1 of the cases we used string sequencing methods borrowed from DNA-sequencing BIBREF33 to associate reported and official names of occupations with at least INLINEFORM2 match. For the remaining INLINEFORM3 of users we directly inspected profiles. The distribution of estimated salaries reflects the expected income heterogeneities as shown in Fig. FIGREF15 . Users were eventually assigned to one of two SES classes based on whether their salary was higher or lower than the average value of the income distribution. Also note, that LinkedIn users may not be representative of the whole population. We discuss this and other types of poential biases in Section SECREF6 ."
],
[
"Finally, motivated by recent remote sensing techniques, we sought to estimate SES via the analysis of the urban environment around the inferred home locations. Similar methodology has been lately reported by the remote sensing community BIBREF34 to predict socio-demographic features of a given neighborhood by analyzing Google Street View images to detect different car models, or to predict poverty rates across urban areas in Africa from satellite imagery BIBREF35 . Driven by this line of work, we estimated the SES of geolocated Twitter users as follows:",
"Using geolocated users identified in Section SECREF3 , we further filtered them to obtain a smaller set of users with more precise inferred home locations. We screened all of their geotagged tweets and looked for regular expressions determining whether or not a tweet was sent from home BIBREF25 . As explained in Section SECREF3 , we exploited that “home-suspected\" expressions appeared with a particular temporal distribution (see Fig. FIGREF4 c) since these expressions were used during the night when users are at home. This selection yielded INLINEFORM0 users mentioning “home-suspected\" expressions regularly at their inferred home locations.",
"In order to filter out inferred home locations not in urban/residential areas, we downloaded via Google Maps Static API BIBREF36 a satellite view in a INLINEFORM0 radius around each coordinate (for a sample see Fig. FIGREF12 a). To discriminate between residential and non-residential areas, we built on land use classifier BIBREF37 using aerial imagery from the UC Merced dataset BIBREF38 . This dataset contains 2100 INLINEFORM1 INLINEFORM2 aerial RGB images over 21 classes of different land use (for a pair of sample images see Fig. FIGREF12 b). To classify land use a CaffeNet architecture was trained which reached an accuracy over INLINEFORM3 . Here, we instantiated a ResNet50 network using keras BIBREF39 pre-trained on ImageNet BIBREF40 where all layers except the last five were frozen. The network was then trained with 10-fold cross validation achieving a INLINEFORM4 accuracy after the first 100 epochs. We used this model to classify images of the estimated home location satellite views (cf. Figure FIGREF12 a) and kept those which were identified as residential areas (see Fig. FIGREF12 b, showing the activation of the two first hidden layers of the trained model). This way INLINEFORM5 inferred home locations were discarded.",
"Next we aimed to estimate SES from architectural/urban features associated to the home locations. Thus, for each home location we collected two additional satellite views at different resolutions as well as six Street View images, each with a horizontal view of approximately INLINEFORM0 . We randomly selected a sample of INLINEFORM1 locations and involved architects to assign a SES score (from 1 to 9) to a sample set of selected locations based on the satellite and Street View around it (both samples had 333 overlapping locations). For validation, we took users from each annotated SES class and computed the distribution of their incomes inferred from the IRIS census data (see Section SECREF6 ). Violin plots in Fig. FIGREF12 d show that in expert annotated data, as expected, the inferred income values were positively correlated with the annotated SES classes. Labels were then categorized into two socioeconomic classes for comparison purposes. All in all, both annotators assigned the same label to the overlapping locations in INLINEFORM2 of samples.",
"To solve the SES inference problem we used the above described three datasets (for a summary see Table TABREF14 ). We defined the inference task as a two-way classification problem by dividing the user set of each dataset into two groups. For the census and occupation datasets the lower and higher SES classes were separated by the average income computed from the whole distribution, while in the case of the expert annotated data we assigned people from the lowest five SES labels to the lower SES class in the two-way task. The relative fractions of people assigned to the two classes are depicted in Fig. FIGREF15 b for each dataset and summarized in Table TABREF14 ."
],
[
"Using the user profile information and tweets collected from every account's timeline, we built a feature set for each user, similar to Lampos et al. BIBREF9 . We categorized features into two sets, one containing shallow features directly observable from the data, while the other was obtained via a pipeline of data processing methods to capture semantic user features."
],
[
"The user level features are based on the general user information or aggregated statistics about the tweets BIBREF11 . We therefore include general ordinal values such as the number and rate of retweets, mentions, and coarse-grained information about the social network of users (number of friends, followers, and ratio of friends to followers). Finally we vectorized each user's profile description and tweets and selected the top 450 and 560 1-grams and 2-grams, respectively, observed through their accounts (where the rank of a given 1-gram was estimated via tf-idf BIBREF41 )."
],
[
"To represent textual information, in addition to word count data, we used topic models to encode coarse-grained information on the content of the tweets of a user, similar to BIBREF9 . This enabled us to easily interpret the relation between semantic and socioeconomic features. Specifically, we started by training a word2vec model BIBREF42 on the whole set of tweets (obtained in the 2014-2015 timeframe) by using the skip-gram model and negative sampling with parameters similar to BIBREF11 , BIBREF10 . To scale up the analysis, the number of dimensions for the embedding was kept at 50. This embedded words in the initial dataset in a INLINEFORM0 vector space.",
"Eventually we extracted conversation topics by running a spectral clustering algorithm on the word-to-word similarity matrix INLINEFORM0 with INLINEFORM1 vocabulary size and elements defined as the INLINEFORM2 cosine similarity between word vectors. Here INLINEFORM3 is a vector of a word INLINEFORM4 in the embedding, INLINEFORM5 is the dot product of vectors, and INLINEFORM6 is the INLINEFORM7 norm of a vector. This definition allows for negative entries in the matrix to cluster, which were set to null in our case. This is consistent with the goal of the clustering procedure as negative similarities shouldn't encode dissimilarity between pairs of words but orthogonality between the embeddings. This procedure was run for 50, 100 and 200 clusters and allowed the homogeneous distribution of words among clusters (hard clustering). The best results were obtained with 100 topics in the topic model. Finally, we manually labeled topics based on the words assigned to them, and computed the topic-to-topic correlation matrix shown in Fig. FIGREF18 . There, after block diagonalization, we found clearly correlated groups of topics which could be associated to larger topical areas such as communication, advertisement or soccer.",
"As a result we could compute a representative topic distribution for each user, defined as a vector of normalized usage frequency of words from each topic. Also note that the topic distribution for a given user was automatically obtained as it depends only on the set of tweets and the learned topic clusters without further parametrization.",
"To demonstrate how discriminative the identified topics were in terms of the SES of users we associated to each user the 9th decile value of the income distribution corresponding to the census block of their home location and computed for each labelled topic the average income of users depending on whether or not they mentioned the given topic. Results in Fig. FIGREF19 demonstrates that topics related to politics, technology or culture are more discussed by people with higher income, while other topics associated to slang, insults or informal abbreviations are more used by people of lower income. These observable differences between the average income of people, who use (or not) words from discriminative topics, demonstrates well the potential of word topic clustering used as features for the inference of SES. All in all, each user in our dataset was assigned with a 1117 feature vector encoding the lexical and semantic profile she displayed on Twitter. We did not apply any further feature selection as the distribution of importance of features appeared rather smooth (not shown here). It did not provided evident ways to identify a clear set of particularly determinant features, but rather indicated that the combination of them were important."
],
[
"In order to assess the degree to which linguistic features can be used for discriminating users by their socioeconomic class, we trained with these feature sets different learning algorithms. Namely, we used the XGBoost algorithm BIBREF43 , an implementation of the gradient-boosted decision trees for this task. Training a decision tree learning algorithm involves the generation of a series of rules, split points or nodes ordered in a tree-like structure enabling the prediction of a target output value based on the values of the input features. More specifically, XGBoost, as an ensemble technique, is trained by sequentially adding a high number of individually weak but complementary classifiers to produce a robust estimator: each new model is built to be maximally correlated with the negative gradient of the loss function associated with the model assembly BIBREF44 . To evaluate the performance of this method we benchmarked it against more standard ensemble learning algorithms such as AdaBoost and Random Forest.",
"For each socioeconomic dataset, we trained our models by using 75% of the available data for training and the remaining 25% for testing. During the training phase, the training data undergoes a INLINEFORM0 -fold inner cross-validation, with INLINEFORM1 , where all splits are computed in a stratified manner to get the same ratio of lower to higher SES users. The four first blocks were used for inner training and the remainder for inner testing. This was repeated ten times for each model so that in the end, each model's performance on the validation set was averaged over 50 samples. For each model, the parameters were fine-tuned by training 500 different models over the aforementioned splits. The selected one was that which gave the best performance on average, which was then applied to the held-out test set. This is then repeated through a 5-fold outer cross-validation.",
"In terms of prediction score, we followed a standard procedure in the literature BIBREF45 and evaluated the learned models by considering the area under the receiver operating characteristic curve (AUC). This metric can be thought as the probability that a classifier ranks a randomly chosen positive instance higher than a randomly chosen negative one BIBREF44 .",
"This procedure was applied to each of our datasets. The obtained results are shown in Fig. FIGREF21 and in Table TABREF22 .",
"As a result, we first observed that XGBoost consistently provided top prediction scores when compared to AdaBoost and Random Forest (all performance scores are summarised in Table TABREF20 ). We hence used it for our predictions in the remainder of this study. We found that the LinkedIn data was the best, with INLINEFORM0 , to train a model to predict SES of people based on their semantic features. It provided a INLINEFORM1 increase in performance as compared to the census based inference with INLINEFORM2 , and INLINEFORM3 relative to expert annotated data with INLINEFORM4 . Thus we can conclude that there seem to be a trade-off between scalability and prediction quality, as while the occupation dataset provided the best results, it seems unlikely to be subject to any upscaling due to the high cost of obtaining a clean dataset. Relying on location to estimate SES seems to be more likely to benefit from such an approach, though at the cost of an increased number of mislabelled users in the dataset. Moreover, the annotator's estimation of SES using Street View at each home location seems to be hindered by the large variability of urban features. Note that even though inter-agreement is 76%, the Cohen's kappa score for annotator inter-agreement is low at 0.169. Furthermore, we remark that the expert annotated pipeline was also subject to noise affecting the home location estimations, which potentially contributed to the lowest predictive performance.",
"Finally, it should also be noted that following recent work by Aletras and Chamberlain in BIBREF21 , we tested our model by extending the feature set with the node2vec embedding of users computed from the mutual mention graph of Twitter. Nevertheless, in our setting, it did not increase the overall predictive performance of the inference pipeline. We hence didn't include in the feature set for the sake of simplicity."
],
[
"In this work we combined multiple datasets collected from various sources. Each of them came with some bias due to the data collection and post-treatment methods or the incomplete set of users. These biases may limit the success of our inference, thus their identification is important for the interpretation and future developments of our framework.",
" INLINEFORM0 Location data: Although we designed very strict conditions for the precise inference of home locations of geolocated users, this process may have some uncertainty due to outlier behaviour. Further bias may be induced by the relatively long time passed between the posting of the location data and of the tweets collection of users.",
" INLINEFORM0 Census data: As we already mentioned the census data does not cover the entire French territory as it reports only cells with close to INLINEFORM1 inhabitants. This may introduce biases in two ways: by limiting the number of people in our sample living in rural areas, and by associating income with large variation to each cell. While the former limit had marginal effects on our predictions, as Twitter users mostly live in urban areas, we addressed the latter effect by associating the median income to users located in a given cell.",
" INLINEFORM0 Occupation data: LinkedIn as a professional online social network is predominantly used by people from IT, business, management, marketing or other expert areas, typically associated with higher education levels and higher salaries. Moreover, we could observe only users who shared their professional profiles on Twitter, which may further biased our training set. In terms of occupational-salary classification, the data in BIBREF32 was collected in 2010 thus may not contain more recent professions. These biases may induce limits in the representativeness of our training data and thus in the predictions' precision. However, results based on this method of SES annotation performed best in our measurements, indicating that professions are among the most predictive features of SES, as has been reported in BIBREF9 .",
" INLINEFORM0 Annotated home locations: The remote sensing annotation was done by experts and their evaluation was based on visual inspection and biased by some unavoidable subjectivity. Although their annotations were cross-referenced and found to be consistent, they still contained biases, like over-representative middle classes, which somewhat undermined the prediction task based on this dataset.",
"Despite these shortcomings, using all the three datasets we were able to infer SES with performances close to earlier reported results, which were based on more thoroughly annotated datasets. Our results, and our approach of using open, crawlable, or remotely sensed data highlights the potential of the proposed methodologies."
],
[
"In this work we proposed a novel methodology for the inference of the SES of Twitter users. We built our models combining information obtained from numerous sources, including Twitter, census data, LinkedIn and Google Maps. We developed precise methods of home location inference from geolocation, novel annotation of remotely sensed images of living environments, and effective combination of datasets collected from multiple sources. As new scientific results, we demonstrated that within the French Twitter space, the utilization of words in different topic categories, identified via advanced semantic analysis of tweets, can discriminate between people of different income. More importantly, we presented a proof-of-concept that our methods are competitive in terms of SES inference when compared to other methods relying on domain specific information.",
"We can identify several future directions and applications of our work. First, further development of data annotation of remotely sensed information is a promising direction. Note that after training, our model requires as input only information, which can be collected exclusively from Twitter, without relying on other data sources. This holds a large potential in terms of SES inference of larger sets of Twitter users, which in turn opens the door for studies to address population level correlations of SES with language, space, time, or the social network. This way our methodology has the merit not only to answer open scientific questions, but also to contribute to the development of new applications in recommendation systems, predicting customer behavior, or in online social services."
],
[
"We thank J-Ph. Magué, J-P. Chevrot, D. Seddah, D. Carnino and E. De La Clergerie for constructive discussions and for their advice on data management and analysis. We are grateful to J. Altnéder and M. Hunyadi for their contributions as expert architects for data annotation."
]
],
"section_name": [
"Introduction",
"Related works",
"Data collection and combination",
"Twitter corpus",
"Census data",
"Occupation data",
"Expert annotated home location data",
"Feature selection",
"User Level Features",
"Linguistic features",
"Results",
"Limitations",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"323df80cae66f9cc4f8924d36ae418054254868e",
"97a4a36f239512280e47d2aca327415807d29d2a",
"b3c3a32e583c0d075dbba002a043ef48cbc523cf"
],
"answer": [
{
"evidence": [
"To obtain meaningful linguistic data we pre-processed the incoming tweet streams in several ways. As our central question here deals with language semantics of individuals, re-tweets do not bring any additional information to our study, thus we removed them by default. We also removed any expressions considered to be semantically meaningless like URLs, emoticons, mentions of other users (denoted by the @ symbol) and hashtags (denoted by the # symbol) to simplify later post-processing. In addition, as a last step of textual pre-processing, we downcased and stripped the punctuation from the text of every tweet."
],
"extractive_spans": [],
"free_form_answer": "They removed retweets, URLs, emoticons, mentions of other users, hashtags; lowercased the text and removed the punctuation.",
"highlighted_evidence": [
"As our central question here deals with language semantics of individuals, re-tweets do not bring any additional information to our study, thus we removed them by default. We also removed any expressions considered to be semantically meaningless like URLs, emoticons, mentions of other users (denoted by the @ symbol) and hashtags (denoted by the # symbol) to simplify later post-processing. In addition, as a last step of textual pre-processing, we downcased and stripped the punctuation from the text of every tweet."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To obtain meaningful linguistic data we pre-processed the incoming tweet streams in several ways. As our central question here deals with language semantics of individuals, re-tweets do not bring any additional information to our study, thus we removed them by default. We also removed any expressions considered to be semantically meaningless like URLs, emoticons, mentions of other users (denoted by the @ symbol) and hashtags (denoted by the # symbol) to simplify later post-processing. In addition, as a last step of textual pre-processing, we downcased and stripped the punctuation from the text of every tweet."
],
"extractive_spans": [
"re-tweets do not bring any additional information to our study, thus we removed them",
" removed any expressions considered to be semantically meaningless like URLs, emoticons, mentions of other users (denoted by the @ symbol) and hashtags",
"downcased and stripped the punctuation"
],
"free_form_answer": "",
"highlighted_evidence": [
"To obtain meaningful linguistic data we pre-processed the incoming tweet streams in several ways. As our central question here deals with language semantics of individuals, re-tweets do not bring any additional information to our study, thus we removed them by default. We also removed any expressions considered to be semantically meaningless like URLs, emoticons, mentions of other users (denoted by the @ symbol) and hashtags (denoted by the # symbol) to simplify later post-processing. In addition, as a last step of textual pre-processing, we downcased and stripped the punctuation from the text of every tweet."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To obtain meaningful linguistic data we pre-processed the incoming tweet streams in several ways. As our central question here deals with language semantics of individuals, re-tweets do not bring any additional information to our study, thus we removed them by default. We also removed any expressions considered to be semantically meaningless like URLs, emoticons, mentions of other users (denoted by the @ symbol) and hashtags (denoted by the # symbol) to simplify later post-processing. In addition, as a last step of textual pre-processing, we downcased and stripped the punctuation from the text of every tweet."
],
"extractive_spans": [],
"free_form_answer": "removing URLs, emoticons, mentions of other users, hashtags; downcasing and stripping punctuations",
"highlighted_evidence": [
"To obtain meaningful linguistic data we pre-processed the incoming tweet streams in several ways.",
"We also removed any expressions considered to be semantically meaningless like URLs, emoticons, mentions of other users (denoted by the @ symbol) and hashtags (denoted by the # symbol) to simplify later post-processing. In addition, as a last step of textual pre-processing, we downcased and stripped the punctuation from the text of every tweet."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"1840c9682260b85bb8f8fdf93999b33dc0772a2c",
"7df4125d364a5e39b638011111797c227047d8ce",
"dabf55507b984cd00f5c40d1910f52ab54b2e55f"
],
"answer": [
{
"evidence": [
"In order to assess the degree to which linguistic features can be used for discriminating users by their socioeconomic class, we trained with these feature sets different learning algorithms. Namely, we used the XGBoost algorithm BIBREF43 , an implementation of the gradient-boosted decision trees for this task. Training a decision tree learning algorithm involves the generation of a series of rules, split points or nodes ordered in a tree-like structure enabling the prediction of a target output value based on the values of the input features. More specifically, XGBoost, as an ensemble technique, is trained by sequentially adding a high number of individually weak but complementary classifiers to produce a robust estimator: each new model is built to be maximally correlated with the negative gradient of the loss function associated with the model assembly BIBREF44 . To evaluate the performance of this method we benchmarked it against more standard ensemble learning algorithms such as AdaBoost and Random Forest."
],
"extractive_spans": [
"XGBoost"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to assess the degree to which linguistic features can be used for discriminating users by their socioeconomic class, we trained with these feature sets different learning algorithms. Namely, we used the XGBoost algorithm BIBREF43 , an implementation of the gradient-boosted decision trees for this task. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to assess the degree to which linguistic features can be used for discriminating users by their socioeconomic class, we trained with these feature sets different learning algorithms. Namely, we used the XGBoost algorithm BIBREF43 , an implementation of the gradient-boosted decision trees for this task. Training a decision tree learning algorithm involves the generation of a series of rules, split points or nodes ordered in a tree-like structure enabling the prediction of a target output value based on the values of the input features. More specifically, XGBoost, as an ensemble technique, is trained by sequentially adding a high number of individually weak but complementary classifiers to produce a robust estimator: each new model is built to be maximally correlated with the negative gradient of the loss function associated with the model assembly BIBREF44 . To evaluate the performance of this method we benchmarked it against more standard ensemble learning algorithms such as AdaBoost and Random Forest."
],
"extractive_spans": [
"XGBoost algorithm BIBREF43"
],
"free_form_answer": "",
"highlighted_evidence": [
"Namely, we used the XGBoost algorithm BIBREF43 , an implementation of the gradient-boosted decision trees for this task."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to assess the degree to which linguistic features can be used for discriminating users by their socioeconomic class, we trained with these feature sets different learning algorithms. Namely, we used the XGBoost algorithm BIBREF43 , an implementation of the gradient-boosted decision trees for this task. Training a decision tree learning algorithm involves the generation of a series of rules, split points or nodes ordered in a tree-like structure enabling the prediction of a target output value based on the values of the input features. More specifically, XGBoost, as an ensemble technique, is trained by sequentially adding a high number of individually weak but complementary classifiers to produce a robust estimator: each new model is built to be maximally correlated with the negative gradient of the loss function associated with the model assembly BIBREF44 . To evaluate the performance of this method we benchmarked it against more standard ensemble learning algorithms such as AdaBoost and Random Forest."
],
"extractive_spans": [],
"free_form_answer": "XGBoost, an ensemble of gradient-based decision trees algorithm ",
"highlighted_evidence": [
"Namely, we used the XGBoost algorithm BIBREF43 , an implementation of the gradient-boosted decision trees for this task. ",
"More specifically, XGBoost, as an ensemble technique, is trained by sequentially adding a high number of individually weak but complementary classifiers to produce a robust estimator: each new model is built to be maximally correlated with the negative gradient of the loss function associated with the model assembly BIBREF44 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"09650884b5ec6f2a706335e68b9ad9f77a62af13",
"4027dfb5bcb154930d3a4817b7338b8f94536f2c",
"7b1a2b8ce661752350d28edb4757620b2c277ecc"
],
"answer": [
{
"evidence": [
"Our central dataset was collected from Twitter, an online news and social networking service. Through Twitter, users can post and interact by “tweeting\" messages with restricted length. Tweets may come with several types of metadata including information about the author's profile, the detected language as well as where and when the tweet was posted. Specifically, we recorded 90,369,215 tweets written in French, posted by 1.3 Million users in the timezones GMT and GMT+1 over one year (between August 2014 to July 2015) BIBREF23 . These tweets were obtained via the Twitter Powertrack API provided by Datasift with an access rate of INLINEFORM0 . Using this dataset we built several other corpora:"
],
"extractive_spans": [
"90,369,215 tweets written in French, posted by 1.3 Million users"
],
"free_form_answer": "",
"highlighted_evidence": [
"Specifically, we recorded 90,369,215 tweets written in French, posted by 1.3 Million users in the timezones GMT and GMT+1 over one year (between August 2014 to July 2015) BIBREF23 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: TABLE I NUMBER OF USERS AND ESTIMATED FRACTIONS OF LOW AND HIGH SES IN EACH DATASET"
],
"extractive_spans": [],
"free_form_answer": "They created 3 datasets with combined size of 37193.",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE I NUMBER OF USERS AND ESTIMATED FRACTIONS OF LOW AND HIGH SES IN EACH DATASET"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our central dataset was collected from Twitter, an online news and social networking service. Through Twitter, users can post and interact by “tweeting\" messages with restricted length. Tweets may come with several types of metadata including information about the author's profile, the detected language as well as where and when the tweet was posted. Specifically, we recorded 90,369,215 tweets written in French, posted by 1.3 Million users in the timezones GMT and GMT+1 over one year (between August 2014 to July 2015) BIBREF23 . These tweets were obtained via the Twitter Powertrack API provided by Datasift with an access rate of INLINEFORM0 . Using this dataset we built several other corpora:"
],
"extractive_spans": [],
"free_form_answer": "90,369,215 tweets",
"highlighted_evidence": [
"Specifically, we recorded 90,369,215 tweets written in French, posted by 1.3 Million users in the timezones GMT and GMT+1 over one year (between August 2014 to July 2015) BIBREF23 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"054b1aa400f91ef18f021015c76bcb551eb2b6ac",
"37810ce816a45648bb0647d1f1f9ef3fc9756da2",
"60a0f20bd88f0de4495415c090133594792affd2"
],
"answer": [
{
"evidence": [
"To demonstrate how discriminative the identified topics were in terms of the SES of users we associated to each user the 9th decile value of the income distribution corresponding to the census block of their home location and computed for each labelled topic the average income of users depending on whether or not they mentioned the given topic. Results in Fig. FIGREF19 demonstrates that topics related to politics, technology or culture are more discussed by people with higher income, while other topics associated to slang, insults or informal abbreviations are more used by people of lower income. These observable differences between the average income of people, who use (or not) words from discriminative topics, demonstrates well the potential of word topic clustering used as features for the inference of SES. All in all, each user in our dataset was assigned with a 1117 feature vector encoding the lexical and semantic profile she displayed on Twitter. We did not apply any further feature selection as the distribution of importance of features appeared rather smooth (not shown here). It did not provided evident ways to identify a clear set of particularly determinant features, but rather indicated that the combination of them were important."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"All in all, each user in our dataset was assigned with a 1117 feature vector encoding the lexical and semantic profile she displayed on Twitter. We did not apply any further feature selection as the distribution of importance of features appeared rather smooth (not shown here). It did not provided evident ways to identify a clear set of particularly determinant features, but rather indicated that the combination of them were important."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"Online social networks have become one of the most disruptive communication platforms, as everyday billions of individuals use them to interact with each other. Their penetration in our everyday lives seems ever-growing and has in turn generated a massive volume of publicly available data open to analysis. The digital footprints left across these multiple media platforms provide us with a unique source to study and understand how the linguistic phenotype of a given user is related to social attributes such as socioeconomic status (SES).",
"INLINEFORM0 Occupation data: LinkedIn as a professional online social network is predominantly used by people from IT, business, management, marketing or other expert areas, typically associated with higher education levels and higher salaries. Moreover, we could observe only users who shared their professional profiles on Twitter, which may further biased our training set. In terms of occupational-salary classification, the data in BIBREF32 was collected in 2010 thus may not contain more recent professions. These biases may induce limits in the representativeness of our training data and thus in the predictions' precision. However, results based on this method of SES annotation performed best in our measurements, indicating that professions are among the most predictive features of SES, as has been reported in BIBREF9 ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The digital footprints left across these multiple media platforms provide us with a unique source to study and understand how the linguistic phenotype of a given user is related to social attributes such as socioeconomic status (SES).",
"However, results based on this method of SES annotation performed best in our measurements, indicating that professions are among the most predictive features of SES, as has been reported in BIBREF9 ."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We provide in Section SECREF2 an overview of the related literature to contextualize the novelty of our work. In Section SECREF3 we provide a detailed description of the data collection and combination methods. In Section SECREF4 we introduce the features extracted to solve the SES inference problem, with results summarized in Section SECREF5 . Finally, in Section SECREF6 and SECREF7 we conclude our paper with a brief discussion of the limitations and perspectives of our methods."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" In Section SECREF4 we introduce the features extracted to solve the SES inference problem, with results summarized in Section SECREF5 ."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"bdd78f8c9ba033bdaa9e3ef01de184cdd8998c19"
],
"answer": [
{
"evidence": [
"In order to assess the degree to which linguistic features can be used for discriminating users by their socioeconomic class, we trained with these feature sets different learning algorithms. Namely, we used the XGBoost algorithm BIBREF43 , an implementation of the gradient-boosted decision trees for this task. Training a decision tree learning algorithm involves the generation of a series of rules, split points or nodes ordered in a tree-like structure enabling the prediction of a target output value based on the values of the input features. More specifically, XGBoost, as an ensemble technique, is trained by sequentially adding a high number of individually weak but complementary classifiers to produce a robust estimator: each new model is built to be maximally correlated with the negative gradient of the loss function associated with the model assembly BIBREF44 . To evaluate the performance of this method we benchmarked it against more standard ensemble learning algorithms such as AdaBoost and Random Forest."
],
"extractive_spans": [
"XGBoost",
"AdaBoost",
"Random Forest"
],
"free_form_answer": "",
"highlighted_evidence": [
"Namely, we used the XGBoost algorithm BIBREF43 , an implementation of the gradient-boosted decision trees for this task. ",
"To evaluate the performance of this method we benchmarked it against more standard ensemble learning algorithms such as AdaBoost and Random Forest."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"7ef438e76c2038f174911311a9e1606df9553ef1"
],
"answer": [
{
"evidence": [
"In order to assess the degree to which linguistic features can be used for discriminating users by their socioeconomic class, we trained with these feature sets different learning algorithms. Namely, we used the XGBoost algorithm BIBREF43 , an implementation of the gradient-boosted decision trees for this task. Training a decision tree learning algorithm involves the generation of a series of rules, split points or nodes ordered in a tree-like structure enabling the prediction of a target output value based on the values of the input features. More specifically, XGBoost, as an ensemble technique, is trained by sequentially adding a high number of individually weak but complementary classifiers to produce a robust estimator: each new model is built to be maximally correlated with the negative gradient of the loss function associated with the model assembly BIBREF44 . To evaluate the performance of this method we benchmarked it against more standard ensemble learning algorithms such as AdaBoost and Random Forest."
],
"extractive_spans": [
"AdaBoost",
"Random Forest"
],
"free_form_answer": "",
"highlighted_evidence": [
"To evaluate the performance of this method we benchmarked it against more standard ensemble learning algorithms such as AdaBoost and Random Forest."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"dbf389dc0d1b6495d818c1d167ce91eadb67bccb"
],
"answer": [
{
"evidence": [
"Next we aimed to estimate SES from architectural/urban features associated to the home locations. Thus, for each home location we collected two additional satellite views at different resolutions as well as six Street View images, each with a horizontal view of approximately INLINEFORM0 . We randomly selected a sample of INLINEFORM1 locations and involved architects to assign a SES score (from 1 to 9) to a sample set of selected locations based on the satellite and Street View around it (both samples had 333 overlapping locations). For validation, we took users from each annotated SES class and computed the distribution of their incomes inferred from the IRIS census data (see Section SECREF6 ). Violin plots in Fig. FIGREF12 d show that in expert annotated data, as expected, the inferred income values were positively correlated with the annotated SES classes. Labels were then categorized into two socioeconomic classes for comparison purposes. All in all, both annotators assigned the same label to the overlapping locations in INLINEFORM2 of samples."
],
"extractive_spans": [],
"free_form_answer": "The SES score was assigned by architects based on the satellite and Street View images of users' homes.",
"highlighted_evidence": [
"We randomly selected a sample of INLINEFORM1 locations and involved architects to assign a SES score (from 1 to 9) to a sample set of selected locations based on the satellite and Street View around it (both samples had 333 overlapping locations)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"8f3a52f80ee6b95f6c78405e56a3e7fb943953f3"
],
"answer": [
{
"evidence": [
"As a second method to estimate SES, we took a sample of Twitter users who mentioned their LinkedIn BIBREF29 profile url in their tweets or Twitter profile. Using these pointers we collected professional profile descriptions from LinkedIn by relying on an automatic crawler mainly used in Search Engine Optimization (SEO) tasks BIBREF30 . We obtained INLINEFORM0 Twitter/LinkedIn users all associated with their job title, professional skills and profile description. Apart from the advantage of working with structured data, professional information extracted from LinkedIn is significantly more reliable than Twitter's due to the high degree of social scrutiny to which each profile is exposed BIBREF31 ."
],
"extractive_spans": [
"LinkedIn"
],
"free_form_answer": "",
"highlighted_evidence": [
"Using these pointers we collected professional profile descriptions from LinkedIn by relying on an automatic crawler mainly used in Search Engine Optimization (SEO) tasks BIBREF30 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How do they preprocess Tweets?",
"What kind of inference model do they build to estimate socioeconomic status?",
"How much data do they gather in total?",
"Do they analyze features which help indicate socioeconomic status?",
"What inference models are used?",
"What baseline model is used?",
"How is the remotely sensed data annotated?",
"Where are the professional profiles crawled from?"
],
"question_id": [
"9baca9bdb8e7d5a750f8cbe3282beb371347c164",
"2cb20bae085b67e357ab1e18ebafeac4bbde5b4a",
"892ee7c2765b3764312c3c2b6f4538322efbed4e",
"c68946ae2e548ec8517c7902585c032b3f3876e6",
"7557f2c3424ae70e2a79c51f9752adc99a9bdd39",
"b03249984c26baffb67e7736458b320148675900",
"9595fdf7b51251679cd39bc4f6befc81f09c853c",
"08c0d4db14773cbed8a63e69381a2265e85f8765"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. (a) Average distance from home of active users per hour of the day. (b) Hourly rate of all geolocated tweets and (c) geolocated tweets mentioning ‘dormir’ averaged over all weekdays.",
"Fig. 2. IRIS area cells in central Paris colored according to the median income of inhabitants, with inferred home locations of 2, 000 Twitter users.",
"Fig. 3. Top: ResNet50 Output: (a): Original satellite view; (b): First two hidden layers activation; (c): Final top-3 most frequent predicted area types; (d) Architect SES score agreement with census median income for the sampled home locations. It is shown as violin plots of income distributions for users annotated in different classes (shown on x-axis and by color).",
"TABLE I NUMBER OF USERS AND ESTIMATED FRACTIONS OF LOW AND HIGH SES IN EACH DATASET",
"Fig. 5. Clustered topic-to-topic correlation matrix: Topics are generated via the spectral clustering of the word2vec word co-similarity matrix. Row labels are the name of topics while column labels are their categories. Blue cells (resp. red) assign negative (resp. positive) Pearsons correlation coefficients.",
"Fig. 4. Cumulative distributions of income as a function of sorted fraction f of individuals. Dashed line corresponds to the perfectly balanced distribution. Distributions appear similar in spite of dealing with heterogeneous samples.",
"TABLE II CLASSIFICATION PERFORMANCE (5-CV): AUC SCORES (MEAN ± STD) OF THREE DIFFERENT CLASSIFIERS ON EACH DATASET)",
"Fig. 6. Average income for users who tweeted about a given topic (blue) vs. those who didn’t (red). Label of the considered topic is on the left.",
"Fig. 7. ROC curves for 2-way SES prediction using tuned XGBoost in each of the 3 SES datasets. AUC values are reported in the legend. The dashed line corresponds to the line of no discrimination. Solid lines assign average values over all folds while shaded regions represent standard deviation.",
"TABLE III DETAILED AVERAGE PERFORMANCE (5-CV) ON TEST DATA FOR THE BINARY SES INFERENCE PROBLEM FOR EACH OF THE 3 DATASETS"
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"5-TableI-1.png",
"5-Figure5-1.png",
"5-Figure4-1.png",
"6-TableII-1.png",
"6-Figure6-1.png",
"7-Figure7-1.png",
"7-TableIII-1.png"
]
} | [
"How do they preprocess Tweets?",
"What kind of inference model do they build to estimate socioeconomic status?",
"How much data do they gather in total?",
"How is the remotely sensed data annotated?"
] | [
[
"1901.05389-Twitter corpus-3"
],
[
"1901.05389-Results-0"
],
[
"1901.05389-Twitter corpus-0",
"1901.05389-5-TableI-1.png"
],
[
"1901.05389-Expert annotated home location data-3"
]
] | [
"removing URLs, emoticons, mentions of other users, hashtags; downcasing and stripping punctuations",
"XGBoost, an ensemble of gradient-based decision trees algorithm ",
"90,369,215 tweets",
"The SES score was assigned by architects based on the satellite and Street View images of users' homes."
] | 243 |
1808.10290 | Acquiring Annotated Data with Cross-lingual Explicitation for Implicit Discourse Relation Classification | Implicit discourse relation classification is one of the most challenging and important tasks in discourse parsing, due to the lack of connective as strong linguistic cues. A principle bottleneck to further improvement is the shortage of training data (ca.~16k instances in the PDTB). Shi et al. (2017) proposed to acquire additional data by exploiting connectives in translation: human translators mark discourse relations which are implicit in the source language explicitly in the translation. Using back-translations of such explicitated connectives improves discourse relation parsing performance. This paper addresses the open question of whether the choice of the translation language matters, and whether multiple translations into different languages can be effectively used to improve the quality of the additional data. | {
"paragraphs": [
[
"Discourse relations connect two sentences/clauses to each other. The identification of discourse relations is an important step in natural language understanding and is beneficial to various downstream NLP applications such as text summarization BIBREF1 , BIBREF2 , question answering BIBREF3 , BIBREF4 , machine translation BIBREF5 , BIBREF6 , and so on.",
"Discourse relations can be marked explicitly using a discourse connective or discourse adverbial such as “because”, “but”, “however”, see example SECREF1 . Explicitly marked relations are relatively easy to classify automatically BIBREF7 . In example SECREF2 , the causal relation is not marked explicitly, and can only be inferred from the texts. This second type of case is empirically even more common than explicitly marked relations BIBREF8 , but is much harder to classify automatically.",
"The difficulty in classifying implicit discourse relations stems from the lack of strong indicative cues. Early work has already shown that implicit relations cannot be learned from explicit ones BIBREF9 , making human-annotated relations the currently only source for training relation classification.",
"Due to the limited size of available training data, several approaches have been proposed for acquiring additional training data using automatic methods BIBREF10 , BIBREF11 . The most promising approach so far, BIBREF0 , exploits the fact that human translators sometimes insert a connective in their translation even when a relation was implicit in the original text. Using a back-translation method, BIBREF0 showed that such instances can be used for acquiring additional labeled text.",
" BIBREF0 however only used a single target langauge (French), and had no control over the quality of the labels extracted from back-translated connectives. In this paper, we therefore systematically compare the contribution of three target translation languages from different language families: French (a Romance language), German (from the Germanic language family) and Czech (a Slavic language). As all three of these languages are part of the EuroParl corpus, this also allows us to directly test whether higher quality can be achieved by using those instances that were consistently explicitated in several languages."
],
[
"Recent methods for discourse relation classification have increasingly relied on neural network architectures. However, with the high number of parameters to be trained in more and more complicated deep neural network architectures, the demand of more reliable annotated data has become even more urgent. Data extension has been a longstanding goal in implicit discourse classification. BIBREF10 proposed to differentiate typical and atypical examples for each relation and augment training data for implicit only by typical explicits. BIBREF11 designed criteria for selecting explicit samples in which connectives can be omitted without changing the interpretation of the discourse. More recently, BIBREF0 proposed a pipeline to automatically label English implicit discourse samples based on explicitation of discourse connectives during human translating in parallel corpora, and achieve substantial improvements in classification. Our work here directly extended theirs by employing document-aligned cross-lingual parallel corpora and majority votes to get more reliable and in-topic annotated implicit discourse relation instances."
],
[
"Our goal here aims at sentence pairs in cross-lingual corpora where connectives have been inserted by human translators during translating from English to several other languages. After back-translating from other languages to English, explicit relations can be easily identified by discourse parser and then original English sentences would be labeled accordingly.",
"We follow the pipeline proposed in BIBREF0 , as illustrated in Figure FIGREF3 , with the following differences: First, we filter and re-paragraph the line-aligned corpus to parallel document-aligned files, which makes it possible to obtain in-topic inter-sentential instances. After preprocessing, we got 532,542 parallel sentence pairs in 6,105 documents. Secondly, we use a statistical machine translation system instead of a neural one for more stable translations."
],
[
"We train three MT systems to back-translate French, German and Czech to English. To have words alignments, better and stable back-translations, we employ a statistical machine translation system Moses BIBREF12 , trained on the same parallel corpora. Source and target sentences are first tokenized, true-cased and then fed into the system for training. In our case, the translation target texts are identical with the training set of the translation systems; this would not be a problem because our only objective in the translation is to back-translate connectives in the translation into English. On the training set, the translation system achieves BLEU scores of 66.20 (French), 65.30 (German) and 69.05 (Czech)."
],
[
"After parsing the back-translations of French, German and Czech, we can compare whether they contain explicit relations which connect the same relational arguments. The analysis of this subset then allows us to identify those instances which could be labeled with high confidence."
],
[
"Europarl Corpora The parallel corpora used here are from Europarl BIBREF13 , it contains about 2.05M English-French, 1.96M English-German and 0.65M English-Czech pairs. After preprocessing, we got about 0.53M parallel sentence pairs in all these four languages.",
"The Penn Discourse Treebank (PDTB) It is the largest manually annotated corpus of discourse relations from Wall Street Journal. Each discourse relation has been annotated in three hierarchy levels. In this paper, we follow the previous conventional settings and focus on the second-level 11-ways classification."
],
[
"To evaluate whether the extracted data is helpful to this task, we use a simple and effective bidirectional Long Short-Term Memory (LSTM) network. After being mapped to vectors, words are fed into the network sequentially. Hidden states of LSTM cell from different directions are averaged. The representations of two arguments from two separate bi-LSTMs are concatenated before being inputed into a softmax layer for prediction.",
"Implementation: The model is implemented in Pytorch. All the parameters are initialized with uniform random. We employ cross-entropy as our cost function, Adagrad with learning rate of 0.01 as the optimization algorithm and set the dropout layers after embedding and ourput layer with drop rates of 0.5 and 0.2 respectively. The word vectors are pre-trained word embedding from Word2Vec.",
"Settings: We follow the previous works and evaluate our data on second-level 11-ways classification on PDTB with 3 settings: BIBREF14 (denotes as PDTB-Lin) uses sections 2-21, 22 and 23 as train, dev and test set; BIBREF15 uses sections 2-20, 0-1 and 21-22 as train, dev and test set; Moreover, we also use 10-folds cross validation among sections 0-23 BIBREF16 . For each experiment, the additional data is only added into the training set."
],
[
"Figure FIGREF11 shows the distributions of expert-annotated PDTB implicit relations and the implicit discourse examples extracted from the French, German and Czech back-translations. Overall, there is no strong bias – all relations seem to be represented similarly well, in line with their general frequency of occurrence. The only exceptions are Expansion.Conjunction relations from the German translations, which are over-represented, and Expansion.Restatement relations which are under-represented based on our back-translation method.",
"Figure FIGREF14 shows that the filtering by majority votes (including only two cases where at least two back-translations agree with one another vs. where all three agree) does again not change the distribution of extracted relations.",
"Table TABREF7 shows that best results are achieved by adding only those samples for which two back-translations agree with one another. This may represent the best trade-off between reliability of the label and the amount of additional data. The setting where the data from all languages is added performs badly despite the large number of samples, because this method contains different labels for the same argument pairs, for all those instances where the back-translations don't yield the same label, introducing noise into the system. The size of the extra data used in BIBREF0 is about 10 times larger than our 2-votes data, as they relied on additional training data (which we could not use in this experiment, as there is no pairing with translations into other languages) and exploited also intra-sentential instances. While we don't match the performance of BIBREF0 on the PDTB-Lin test set, the high quality translation data shows better generalisability by outperforming all other settings in the cross-validation (which is based on 16 test instances, while the PDTB-Lin test set contains less than 800 instances and hence exhibits more variability in general).",
"Finally, we want to provide insight into what kind of instances the system extracts, and why back-translation labels sometimes disagree. We have identified four major cases based on a manual analysis of 100 randomly sampled instances.",
"Case 1: Sometimes, back-translations from several languages may yield the same connective because the original English sentence actually was not really unmarked, but rather contained an expression which could not be automatically recognized as a discourse relation marker by the automatic discourse parser:",
"",
"Original English: I presided over a region crossed by heavy traffic from all over Europe...what is more, in 2002, two Member States of the European Union appealed to the European Court of Justice...",
"French: moreover (Expansion.Conjunction)",
"German: moreover (Expansion.Conjunction)",
"Czech: therefore (Contingency.Cause) after all",
" The expression what is more is not part of the set of connectives labeled in PDTB and hence was not identified by the discourse parser. Our method is successful because such cues can be automatically identified from the consistent back-translations into two languages. (The case in Czech is more complex because the back-translation contains two signals, therefore and after all, see case 4.)",
"Case 2: Majority votes help to reduce noise related to errors introduced by the automatic pipeline, such as argument or connective misidentification: in the below example, also in the French translation is actually the translation of along with.",
" Original English: ...the public should be able to benefit in two ways from the potential for greater road safety. For this reason, along with the report we are discussing today, I call for more research into ...the safety benefits of driver-assistance systems.",
"French: also (Expansion.Conjunction)",
"German: therefore (Contingency.Cause)",
"Czech: therefore (Contingency.Cause)",
"",
"Case 3: Discrepancies between connectives in back-translation can also be due to differences in how translators interpreted the original text:",
"",
"Original English: ...we are dealing in this case with the domestic legal system of the Member States. That being said, I cannot answer for the Council of Europe or for the European Court of Human Rights...",
"French: however (Comparison.Contrast)",
"German: therefore (Contingency.Cause)",
"Czech: in addition (Expansion.Conjunction)",
"",
"Case 4: Implicit relations can co-occur with marked discourse relations BIBREF17 , and multiple translations help discover these instances, for example:",
"",
"Original English: We all understand that nobody can return Russia to the path of freedom and democracy... (implicit: but) what is more, the situation in our country is not as straightforward as it might appear...",
"French: but (Comparison.Contrast) there is more",
"",
""
],
[
"We compare the explicitations obtained from translations into three different languages, and find that instances where at least two back-translations agree yield the best quality, significantly outperforming a version of the model that does not use additional data, or uses data from just one language. A qualitative analysis furthermore shows that the strength of the method partially stems from being able to learn additional discourse cues which are typically translated consistently, and suggests that our method may also be used for identifying multiple relations holding between two arguments."
]
],
"section_name": [
"Introduction",
"Related Work",
"Methodology",
"Machine Translation",
"Majority Vote",
"Data",
"Implicit discourse relation classification",
"Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"5c4bc505b1b4f30b8afc72d88c7eda2d4545654b",
"fe8d6418c0d9a3e4a16d69042decaa66623d8c62"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 1: The pipeline of proposed method. “SMT” and “DRP” denote statistical machine translation and discourse relation parser respectively."
],
"extractive_spans": [],
"free_form_answer": "45680",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 1: The pipeline of proposed method. “SMT” and “DRP” denote statistical machine translation and discourse relation parser respectively."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF7 shows that best results are achieved by adding only those samples for which two back-translations agree with one another. This may represent the best trade-off between reliability of the label and the amount of additional data. The setting where the data from all languages is added performs badly despite the large number of samples, because this method contains different labels for the same argument pairs, for all those instances where the back-translations don't yield the same label, introducing noise into the system. The size of the extra data used in BIBREF0 is about 10 times larger than our 2-votes data, as they relied on additional training data (which we could not use in this experiment, as there is no pairing with translations into other languages) and exploited also intra-sentential instances. While we don't match the performance of BIBREF0 on the PDTB-Lin test set, the high quality translation data shows better generalisability by outperforming all other settings in the cross-validation (which is based on 16 test instances, while the PDTB-Lin test set contains less than 800 instances and hence exhibits more variability in general).",
"FLOAT SELECTED: Table 1: Performances with different sets of additional data. Average accuracy of 10 runs (5 for cross validations) are shown here with standard deviation in the brackets. Numbers in bold are significantly (p<0.05) better than the PDTB only baseline with unpaired t-test."
],
"extractive_spans": [],
"free_form_answer": "In case of 2-votes they used 9,298 samples and in case of 3-votes they used 1,298 samples. ",
"highlighted_evidence": [
"Table TABREF7 shows that best results are achieved by adding only those samples for which two back-translations agree with one another. This may represent the best trade-off between reliability of the label and the amount of additional data. The setting where the data from all languages is added performs badly despite the large number of samples, because this method contains different labels for the same argument pairs, for all those instances where the back-translations don't yield the same label, introducing noise into the system. The size of the extra data used in BIBREF0 is about 10 times larger than our 2-votes data, as they relied on additional training data (which we could not use in this experiment, as there is no pairing with translations into other languages) and exploited also intra-sentential instances. ",
"FLOAT SELECTED: Table 1: Performances with different sets of additional data. Average accuracy of 10 runs (5 for cross validations) are shown here with standard deviation in the brackets. Numbers in bold are significantly (p<0.05) better than the PDTB only baseline with unpaired t-test."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"57bcba7fcece6d568489484f8ef250d3b64f2764",
"71a3976b4feb7a542582f46418f5f5b1a95c43d1"
],
"answer": [
{
"evidence": [
"Table TABREF7 shows that best results are achieved by adding only those samples for which two back-translations agree with one another. This may represent the best trade-off between reliability of the label and the amount of additional data. The setting where the data from all languages is added performs badly despite the large number of samples, because this method contains different labels for the same argument pairs, for all those instances where the back-translations don't yield the same label, introducing noise into the system. The size of the extra data used in BIBREF0 is about 10 times larger than our 2-votes data, as they relied on additional training data (which we could not use in this experiment, as there is no pairing with translations into other languages) and exploited also intra-sentential instances. While we don't match the performance of BIBREF0 on the PDTB-Lin test set, the high quality translation data shows better generalisability by outperforming all other settings in the cross-validation (which is based on 16 test instances, while the PDTB-Lin test set contains less than 800 instances and hence exhibits more variability in general)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The size of the extra data used in BIBREF0 is about 10 times larger than our 2-votes data, as they relied on additional training data (which we could not use in this experiment, as there is no pairing with translations into other languages) and exploited also intra-sentential instances."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Settings: We follow the previous works and evaluate our data on second-level 11-ways classification on PDTB with 3 settings: BIBREF14 (denotes as PDTB-Lin) uses sections 2-21, 22 and 23 as train, dev and test set; BIBREF15 uses sections 2-20, 0-1 and 21-22 as train, dev and test set; Moreover, we also use 10-folds cross validation among sections 0-23 BIBREF16 . For each experiment, the additional data is only added into the training set."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Settings: We follow the previous works and evaluate our data on second-level 11-ways classification on PDTB with 3 settings: BIBREF14 (denotes as PDTB-Lin) uses sections 2-21, 22 and 23 as train, dev and test set; BIBREF15 uses sections 2-20, 0-1 and 21-22 as train, dev and test set; Moreover, we also use 10-folds cross validation among sections 0-23 BIBREF16 . For each experiment, the additional data is only added into the training set."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"054f215da89285f2df39d31230c9ce4ca0533ceb",
"4b70f094f017739e02de44f57e2c3dd6c3061392"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 2: Numbers of implicit discourse relation instances from different agreements of explicit instances in three back-translations. En-Fr denotes instances that are implicit in English but explicit in back-translation of French, same for En-De and En-Cz. The overlap means they share the same relational arguments. The numbers under “Two-Votes” and “Three-Votes” are the numbers of discourse relation agreement / disagreement between explicits in back-translations of two or three languages."
],
"extractive_spans": [],
"free_form_answer": "4",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 2: Numbers of implicit discourse relation instances from different agreements of explicit instances in three back-translations. En-Fr denotes instances that are implicit in English but explicit in back-translation of French, same for En-De and En-Cz. The overlap means they share the same relational arguments. The numbers under “Two-Votes” and “Three-Votes” are the numbers of discourse relation agreement / disagreement between explicits in back-translations of two or three languages."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"BIBREF0 however only used a single target langauge (French), and had no control over the quality of the labels extracted from back-translated connectives. In this paper, we therefore systematically compare the contribution of three target translation languages from different language families: French (a Romance language), German (from the Germanic language family) and Czech (a Slavic language). As all three of these languages are part of the EuroParl corpus, this also allows us to directly test whether higher quality can be achieved by using those instances that were consistently explicitated in several languages.",
"Europarl Corpora The parallel corpora used here are from Europarl BIBREF13 , it contains about 2.05M English-French, 1.96M English-German and 0.65M English-Czech pairs. After preprocessing, we got about 0.53M parallel sentence pairs in all these four languages."
],
"extractive_spans": [
"four languages"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we therefore systematically compare the contribution of three target translation languages from different language families: French (a Romance language), German (from the Germanic language family) and Czech (a Slavic language). As all three of these languages are part of the EuroParl corpus, this also allows us to directly test whether higher quality can be achieved by using those instances that were consistently explicitated in several languages.",
"Europarl Corpora The parallel corpora used here are from Europarl BIBREF13 , it contains about 2.05M English-French, 1.96M English-German and 0.65M English-Czech pairs. After preprocessing, we got about 0.53M parallel sentence pairs in all these four languages."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How much additional data do they manage to generate from translations?",
"Do they train discourse relation models with augmented data?",
"How many languages do they at most attempt to use to generate discourse relation labelled data?"
],
"question_id": [
"5e29f16d7302f24ab93b7707d115f4265a0d14b0",
"26844cec57df6ff0f02245ea862af316b89edffe",
"d1d59bca40b8b308c0a35fed1b4b7826c85bc9f8"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The pipeline of proposed method. “SMT” and “DRP” denote statistical machine translation and discourse relation parser respectively.",
"Figure 2: Numbers of implicit discourse relation instances from different agreements of explicit instances in three back-translations. En-Fr denotes instances that are implicit in English but explicit in back-translation of French, same for En-De and En-Cz. The overlap means they share the same relational arguments. The numbers under “Two-Votes” and “Three-Votes” are the numbers of discourse relation agreement / disagreement between explicits in back-translations of two or three languages.",
"Figure 3: Bi-LSTM network for implicit discoure relation classification.",
"Figure 4: Distributions of PDTB and the extracted data among each discourse relation.",
"Figure 5: Distributions of discourse relations with different agreements.",
"Table 1: Performances with different sets of additional data. Average accuracy of 10 runs (5 for cross validations) are shown here with standard deviation in the brackets. Numbers in bold are significantly (p<0.05) better than the PDTB only baseline with unpaired t-test."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"4-Figure3-1.png",
"5-Figure4-1.png",
"6-Figure5-1.png",
"6-Table1-1.png"
]
} | [
"How much additional data do they manage to generate from translations?",
"How many languages do they at most attempt to use to generate discourse relation labelled data?"
] | [
[
"1808.10290-Results-2",
"1808.10290-6-Table1-1.png",
"1808.10290-3-Figure1-1.png"
],
[
"1808.10290-4-Figure2-1.png",
"1808.10290-Data-0"
]
] | [
"In case of 2-votes they used 9,298 samples and in case of 3-votes they used 1,298 samples. ",
"4"
] | 244 |
1612.04118 | Information Extraction with Character-level Neural Networks and Free Noisy Supervision | We present an architecture for information extraction from text that augments an existing parser with a character-level neural network. The network is trained using a measure of consistency of extracted data with existing databases as a form of noisy supervision. Our architecture combines the ability of constraint-based information extraction systems to easily incorporate domain knowledge and constraints with the ability of deep neural networks to leverage large amounts of data to learn complex features. Boosting the existing parser's precision, the system led to large improvements over a mature and highly tuned constraint-based production information extraction system used at Bloomberg for financial language text. | {
"paragraphs": [
[
"Unstructured textual data is abundant in the financial domain (see e.g. Figure FIGREF2 ). This information is by definition not in a format that lends itself to immediate processing. Hence, information extraction is an essential step in business applications that require fast, accurate, and low-cost information processing. In the financial domain, these applications include the creation of time series databases for macroeconomic forecasting or financial analysis, as well as the real-time extraction of time series data to inform algorithmic trading strategies. Bloomberg has had information extraction systems for financial language text for nearly a decade.",
"To meet the application domain's high accuracy requirements, marrying constraints with statistical models is often beneficial, see e.g. BIBREF0 , BIBREF1 . Many quantities appearing in information extraction problems are by definition constrained in the numerical values they can assume (e.g. unemployment numbers cannot be negative numbers, while changes in unemployment numbers can be negative). The inclusion of such constraints may significantly boost data efficiency. Constraints can be complex in nature, and may involve multiple entities belonging to an extraction candidate generated by the parser. At Bloomberg, we found the system for information extraction described in this paper especially useful to extract time series (TS) data. As an example, consider numerical relations of the form",
"ts_tick_abs (TS symbol, numerical value),",
"e.g. ts_tick_abs (US_Unemployment, 4.9%), or",
"ts_tick_rel (TS symbol, change in num. value),",
"e.g. ts_tick_abs (US_Unemployment, -0.2%)."
],
[
"We present an information extraction architecture that augments a candidate-generating parser with a deep neural network. The candidate-generating parser may leverage constraints. At the same time, the architecture gains the neural networks's ability to leverage large amounts of data to learn complex features that are tuned for the application at hand. Our method assumes the existence of a potentially noisy source of supervision INLINEFORM0 , e.g. via consistency checks of extracted data against existing databases, or via human interaction. This supervision is used to train the neural network.",
"Our extraction system has three advantages over earlier work on information extraction with deep neural networks BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 :",
"Our system leverages “free” data to train a deep neural network, and does not require large-scale manual annotation. The network is trained with noisy supervision provided by measures of consistency with existing databases (e.g. an extraction ts_tick_abs (US_Unemployment, 49%) would be implausible given recent US employment history). With slight modifications, our pipeline could be trained with supervision from human interaction, such as clicks on online advertisements. Learning without explicit annotations is critical in applications where large-scale manual annotation would be prohibitively expensive.",
"If an extractor for the given application has already been built, the neural network boosts its accuracy without the need to re-engineer or discard the existing solution. Even for new systems, the decoupling of candidate-generation and the neural network offers advantages: the candidate-generating parser can easily enforce contraints that would be difficult to support in an algorithm relying entirely on a neural network. Note that, in particular, a carefully engineered candidate-generating parser enforces constraints intelligently, and can in many instances eliminate the need to evaluate computationally expensive constraints, e.g. API calls.",
"We encode the candidate-generating parser's document annotations character-by-character into vectors INLINEFORM0 that also include a one-hot encoding of the character itself. We believe that this encoding makes it easier for the network to learn character-level characteristics of the entities in the semantic relation. Moreover, our encoding lends itself well to processing both by recurrent architectures (processing character-by-character input vectors INLINEFORM1 ) and convolutional architectures (performing 1D convolutions over an input matrix whose columns are vectors INLINEFORM2 ).",
"In a production setting, the neural architecture presented here reduced the number of false positive extractions in financial information extraction application by INLINEFORM0 relative to a mature system developed over the course of several years."
],
[
"The information extraction pipeline we developed consists of four stages (see right pane of Figure FIGREF12 ).",
"The document is parsed using a potentially constraint-based parser, which outputs a set of candidate extractions. Each candidate extraction consists of the character offsets of all extracted constituent entities, as well as a representation of the extracted relation. It may additionally contain auxilliary information that the parser may have generated, such as part of speech tags.",
"We compute a consistency score INLINEFORM0 for the candidate extraction, measuring if the extracted relation is consistent with (noisy) supervision INLINEFORM1 (e.g. an existing database).",
"Each candidate extraction generated, together with the section of the document it was found in, is encoded into feature data INLINEFORM0 . A deep neural network is used to compute a neural network candidate correctness score INLINEFORM1 for each extraction candidate.",
"A linear classifier classifies extraction candidates as correct and incorrect extractions, based on consistency and correctness scores INLINEFORM0 and INLINEFORM1 and potentially other features. Candidates classified as incorrect are discarded."
],
[
"The neural network processes each input candidate independently. To estimate the correctness of a extracted candidate, the network is provided with two pieces of input (see Figure FIGREF14 for the full structure of the neural network): first, the network is provided with a vector INLINEFORM0 containing global features, such as attributes of the document's author, or word-level n-gram features of the document text. The second piece of input data consists of a sequence of vectors INLINEFORM1 , encoding the document text and the parser's output at a character level. There is one vector INLINEFORM2 for each character INLINEFORM3 of the document section where the extraction candidate was found.",
"The vectors INLINEFORM0 are a concatenation of (i) a one-hot encoding of the character and (ii) information about entities the parser identified at the position of INLINEFORM1 . For (i) we use a restricted character set of size 94, including [a-zA-Z0-9] and several whitespace and special characters, plus an indicator to represent characters not present in our restricted character set. For (ii), INLINEFORM2 contains data representing the parser's output. For our application, we include in INLINEFORM3 a vector of indicators specifying whether or not any of the entities appearing in the relations supported by the parser were found in the position of character INLINEFORM4 ."
],
[
"We propose to train the neural network by referencing candidates extracted by a high-recall candidate-generating parser against a potentially noisy reference source (see Figure FIGREF12 , left panel). In our application, this reference was a database containing historical time series data, which enabled us to check how well the extracted numerical data fit into time series in the database. Concretely, we compute a consistency score INLINEFORM0 that measures the degree of consistency with the database. Depending on the application, the score may for instance be a squared relative error, an absolute error, or a more complex error function. In many applications, the score INLINEFORM1 will be noisy (see below for further discussion). We threshold INLINEFORM2 to obtain binary correctness labels INLINEFORM3 . We then use the binary correctness labels INLINEFORM4 for supervised neural network training, with binary cross-entropy loss as the loss function. This allows us to train a network that can compute a pseudo-likelihood INLINEFORM5 of a given extraction candidate to agree with the database. Thus, INLINEFORM6 estimates how likely the extraction candidate is correct.",
"We assume that the noise in the source of supervision INLINEFORM0 is limited in magnitude, e.g. INLINEFORM1 . We moreover assume that there are no strong patterns in the distribution of the noise: if the noise correlates with certain attributes of the candidate-extraction, the pseudo-likelihoods INLINEFORM2 might no longer be a good estimate of the candidate extraction's probability of being a correct extraction.",
"There are two sources of noise in our application's database supervision. First, there is a high rate of false positives. It is not rare for the parser to generate an extraction candidate ts_tick_abs (TS symbol, numerical value) in which the numerical value fits into the time series of the time series symbol, but the extraction is nonetheless incorrect. False negatives are also a problem: many financial time series are sparse and are rarely observed. As a result, it is common for differences between reference numerical values and extracted numerical values to be large even for correct extractions.",
"The neural network's training data consists of candidates generated by the candidate-generating parser, and noisy binary consistency labels INLINEFORM0 ."
],
[
"The full pipeline, deployed in a production setting, resulted in a reduction in false positives of more than INLINEFORM0 in the extractions produced by our pipeline. The drop in recall relative to the production system was smaller than INLINEFORM1 .",
"We found that even with only 256 hidden LSTM cells, the neural network described in the previous section significantly outperformed a 2-layer fully connected network with n-grams based on document text and parser annotations as input."
],
[
"We presented an architecture for information extraction from text using a combination of an existing parser and a deep neural network. The architecture can boost the precision of a high-recall information extraction system. To train the neural network, we use measures of consistency between extracted data and existing databases as a form of noisy supervision. The architecture resulted in substantial improvements over a mature and highly tuned constraint-based information extraction system for financial language text. While we used time series databases to derive measures of consistency for candidate extractions, our set-up can easily be applied to a variety of other information extraction tasks for which potentially noisy reference data is available."
],
[
"We would like to thank my managers Alex Bozic, Tim Phelan and Joshwini Pereira for supporting this project, as well as David Rosenberg from the CTO's office for providing access to GPU infrastructure."
]
],
"section_name": [
"Information extraction in finance",
"Our contribution",
"Overview",
"Neural network input and architecture",
"Training and database supervision",
"Results",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"4aca828cb9e75dc5aad3baa36f8e1f56d88d656d",
"ffec76ef7475b836fe7694091af053af7625fc8c"
],
"answer": [
{
"evidence": [
"In a production setting, the neural architecture presented here reduced the number of false positive extractions in financial information extraction application by INLINEFORM0 relative to a mature system developed over the course of several years."
],
"extractive_spans": [],
"free_form_answer": "By more than 90%",
"highlighted_evidence": [
"In a production setting, the neural architecture presented here reduced the number of false positive extractions in financial information extraction application by INLINEFORM0 relative to a mature system developed over the course of several years."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The full pipeline, deployed in a production setting, resulted in a reduction in false positives of more than INLINEFORM0 in the extractions produced by our pipeline. The drop in recall relative to the production system was smaller than INLINEFORM1 ."
],
"extractive_spans": [],
"free_form_answer": "false positives improved by 90% and recall improved by 1%",
"highlighted_evidence": [
"The full pipeline, deployed in a production setting, resulted in a reduction in false positives of more than INLINEFORM0 in the extractions produced by our pipeline. The drop in recall relative to the production system was smaller than INLINEFORM1 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"05739aac3c3b891450be1e83bfe56d5a4451d314",
"4fe97202a908469d4a612f064addd0a541a1c1a0"
],
"answer": [
{
"evidence": [
"We propose to train the neural network by referencing candidates extracted by a high-recall candidate-generating parser against a potentially noisy reference source (see Figure FIGREF12 , left panel). In our application, this reference was a database containing historical time series data, which enabled us to check how well the extracted numerical data fit into time series in the database. Concretely, we compute a consistency score INLINEFORM0 that measures the degree of consistency with the database. Depending on the application, the score may for instance be a squared relative error, an absolute error, or a more complex error function. In many applications, the score INLINEFORM1 will be noisy (see below for further discussion). We threshold INLINEFORM2 to obtain binary correctness labels INLINEFORM3 . We then use the binary correctness labels INLINEFORM4 for supervised neural network training, with binary cross-entropy loss as the loss function. This allows us to train a network that can compute a pseudo-likelihood INLINEFORM5 of a given extraction candidate to agree with the database. Thus, INLINEFORM6 estimates how likely the extraction candidate is correct."
],
"extractive_spans": [
"database containing historical time series data"
],
"free_form_answer": "",
"highlighted_evidence": [
"In our application, this reference was a database containing historical time series data, which enabled us to check how well the extracted numerical data fit into time series in the database. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We propose to train the neural network by referencing candidates extracted by a high-recall candidate-generating parser against a potentially noisy reference source (see Figure FIGREF12 , left panel). In our application, this reference was a database containing historical time series data, which enabled us to check how well the extracted numerical data fit into time series in the database. Concretely, we compute a consistency score INLINEFORM0 that measures the degree of consistency with the database. Depending on the application, the score may for instance be a squared relative error, an absolute error, or a more complex error function. In many applications, the score INLINEFORM1 will be noisy (see below for further discussion). We threshold INLINEFORM2 to obtain binary correctness labels INLINEFORM3 . We then use the binary correctness labels INLINEFORM4 for supervised neural network training, with binary cross-entropy loss as the loss function. This allows us to train a network that can compute a pseudo-likelihood INLINEFORM5 of a given extraction candidate to agree with the database. Thus, INLINEFORM6 estimates how likely the extraction candidate is correct."
],
"extractive_spans": [
"a database containing historical time series data"
],
"free_form_answer": "",
"highlighted_evidence": [
" In our application, this reference was a database containing historical time series data, which enabled us to check how well the extracted numerical data fit into time series in the database. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"55dc97d9ba671dd8a75e7d0a7997dd0ebc31498d",
"e724dcec6d82137d68c0cec6031f6bff94afeadf"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We present an information extraction architecture that augments a candidate-generating parser with a deep neural network. The candidate-generating parser may leverage constraints. At the same time, the architecture gains the neural networks's ability to leverage large amounts of data to learn complex features that are tuned for the application at hand. Our method assumes the existence of a potentially noisy source of supervision INLINEFORM0 , e.g. via consistency checks of extracted data against existing databases, or via human interaction. This supervision is used to train the neural network."
],
"extractive_spans": [
"candidate-generating parser "
],
"free_form_answer": "",
"highlighted_evidence": [
"We present an information extraction architecture that augments a candidate-generating parser with a deep neural network. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"by how much did the system improve?",
"what existing databases were used?",
"what existing parser is used?"
],
"question_id": [
"4d824b49728649432371ecb08f66ba44e50569e0",
"02a5acb484bda77ef32a13f5d93d336472cf8cd4",
"863d8d32a1605402e11f0bf63968a14bcfd15337"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Figure 2: Training set-up (left) and execution (right). Blocks marked “L” are neural network LSTM cells, while blocks marked “F” are fully connected layers.",
"Figure 3: We use a neural network comprised of an LSTM (Hochreiter and Schmidhuber 1997), which processes encoded parser output and a section of document text character-by-character, a fully connected layer (FC, blue) that takes documentlevel features as input, and a fully connected layer (FC, grey) that takes the output vectors of the previous two layers as input to generate a correctness estimate for the extraction candidate. The final fully connected layer uses a sigmoid activation function to generate a correctness estimate ỹ ∈ (0, 1), from which we compute the network correctness score as s̃ := σ−1(ỹ)."
],
"file": [
"3-Figure2-1.png",
"3-Figure3-1.png"
]
} | [
"by how much did the system improve?"
] | [
[
"1612.04118-Our contribution-5",
"1612.04118-Results-0"
]
] | [
"false positives improved by 90% and recall improved by 1%"
] | 245 |
1804.01155 | Socioeconomic Dependencies of Linguistic Patterns in Twitter: A Multivariate Analysis | Our usage of language is not solely reliant on cognition but is arguably determined by myriad external factors leading to a global variability of linguistic patterns. This issue, which lies at the core of sociolinguistics and is backed by many small-scale studies on face-to-face communication, is addressed here by constructing a dataset combining the largest French Twitter corpus to date with detailed socioeconomic maps obtained from national census in France. We show how key linguistic variables measured in individual Twitter streams depend on factors like socioeconomic status, location, time, and the social network of individuals. We found that (i) people of higher socioeconomic status, active to a greater degree during the daytime, use a more standard language; (ii) the southern part of the country is more prone to use more standard language than the northern one, while locally the used variety or dialect is determined by the spatial distribution of socioeconomic status; and (iii) individuals connected in the social network are closer linguistically than disconnected ones, even after the effects of status homophily have been removed. Our results inform sociolinguistic theory and may inspire novel learning methods for the inference of socioeconomic status of people from the way they tweet. | {
"paragraphs": [
[
"Communication is highly variable and this variability contributes to language change and fulfills social functions. Analyzing and modeling data from social media allows the high-resolution and long-term follow-up of large samples of speakers, whose social links and utterances are automatically collected. This empirical basis and long-standing collaboration between computer and social scientists could dramatically extend our understanding of the links between language variation, language change, and society.",
"Languages and communication systems of several animal species vary in time, geographical space, and along social dimensions. Varieties are shared by individuals frequenting the same space or belonging to the same group. The use of vocal variants is flexible. It changes with the context and the communication partner and functions as \"social passwords\" indicating which individual is a member of the local group BIBREF0 . Similar patterns can be found in human languages if one considers them as evolving and dynamical systems that are made of several social or regional varieties, overlapping or nested into each other. Their emergence and evolution result from their internal dynamics, contact with each other, and link formation within the social organization, which itself is evolving, composite and multi-layered BIBREF1 , BIBREF2 .",
"The strong tendency of communication systems to vary, diversify and evolve seems to contradict their basic function: allowing mutual intelligibility within large communities over time. Language variation is not counter adaptive. Rather, subtle differences in the way others speak provide critical cues helping children and adults to organize the social world BIBREF3 . Linguistic variability contributes to the construction of social identity, definition of boundaries between social groups and the production of social norms and hierarchies.",
"Sociolinguistics has traditionally carried out research on the quantitative analysis of the so-called linguistic variables, i.e. points of the linguistic system which enable speakers to say the same thing in different ways, with these variants being \"identical in reference or truth value, but opposed in their social [...] significance\" BIBREF4 . Such variables have been described in many languages: variable pronunciation of -ing as [in] instead of [iŋ] in English (playing pronounced playin'); optional realization of the first part of the French negation (je (ne) fume pas, \"I do not smoke\"); optional realization of the plural ending of verb in Brazilian Portuguese (eles disse(ram), \"they said\"). For decades, sociolinguistic studies have showed that hearing certain variants triggers social stereotypes BIBREF5 . The so-called standard variants (e.g. [iŋ], realization of negative ne and plural -ram) are associated with social prestige, high education, professional ambition and effectiveness. They are more often produced in more formal situation. Non-standard variants are linked to social skills, solidarity and loyalty towards the local group, and they are produced more frequently in less formal situation.",
"It is therefore reasonable to say that the sociolinguistic task can benefit from the rapid development of computational social science BIBREF6 : the similarity of the online communication and face-to-face interaction BIBREF7 ensures the validity of the comparison with previous works. In this context, the nascent field of computational sociolinguistics found the digital counterparts of the sociolinguistic patterns already observed in spoken interaction. However a closer collaboration between computer scientists and sociolinguists is needed to meet the challenges facing the field BIBREF8 :",
"The present work meets most of these challenges. It constructs the largest dataset of French tweets enriched with census sociodemographic information existent to date to the best of our knowledge. From this dataset, we observed variation of two grammatical cues and an index of vocabulary size in users located in France. We study how the linguistic cues correlated with three features reflective of the socioeconomic status of the users, their most representative location and their daily periods of activity on Twitter. We also observed whether connected people are more linguistically alike than disconnected ones. Multivariate analysis shows strong correlations between linguistic cues and socioeconomic status as well as a broad spatial pattern never observed before, with more standard language variants and lexical diversity in the southern part of the country. Moreover, we found an unexpected daily cyclic evolution of the frequency of standard variants. Further analysis revealed that the observed cycle arose from the ever changing average economic status of the population of users present in Twitter through the day. Finally, we were able to establish that linguistic similarity between connected people does arises partially but not uniquely due to status homophily (users with similar socioeconomic status are linguistically similar and tend to connect). Its emergence is also due to other effects potentially including other types of homophilic correlations or influence disseminated over links of the social network. Beyond we verify the presence of status homophily in the Twitter social network our results may inform novel methods to infer socioeconomic status of people from the way they use language. Furthermore, our work, rooted within the web content analysis line of research BIBREF9 , extends the usual focus on aggregated textual features (like document frequency metrics or embedding methods) to specific linguistic markers, thus enabling sociolinguistics knowledge to inform the data collection process."
],
[
"For decades, sociolinguistic studies have repeatedly shown that speakers vary the way they talk depending on several factors. These studies have usually been limited to the analysis of small scale datasets, often obtained by surveying a set of individuals, or by direct observation after placing them in a controlled experimental setting. In spite of the volume of data collected generally, these studies have consistently shown the link between linguistic variation and social factors BIBREF10 , BIBREF11 .",
"Recently, the advent of social media and publicly available communication platforms has opened up a new gate to access individual information at a massive scale. Among all available social platforms, Twitter has been regarded as the choice by default, namely thanks to the intrinsic nature of communications taking place through it and the existence of data providers that are able to supply researchers with the volume of data they require. Work previously done on demographic variation is now relying increasingly on corpora from this social media platform as evidenced by the myriad of results showing that this resource reflects not only morpholexical variation of spoken language but also geographical BIBREF12 , BIBREF13 .",
"Although the value of this kind of platform for linguistic analysis has been more than proven, the question remains on how previous sociolinguistic results scale up to the sheer amount of data within reach and how can the latter enrich the former. To do so, numerous studies have focused on enhancing the data emanating from Twitter itself. Indeed, one of the core limitations of Twitter is the lack of reliable sociodemographic information about the sampled users as usually data fields such as user-entered profile locations, gender or age differ from reality. This in turn implies that user-generated profile content cannot be used as a useful proxy for the sociodemographic information BIBREF14 .",
"Many studies have overcome this limitation by taking advantage of the geolocation feature allowing Twitter users to include in their posts the location from which they were tweeted. Based on this metadata, studies have been able to assign home location to geolocated users with varying degrees of accuracy BIBREF15 . Subsequent work has also been devoted to assigning to each user some indicator that might characterize their socioeconomic status based on their estimated home location. These indicators are generally extracted from other datasets used to complete the Twitter one, namely census data BIBREF16 , BIBREF12 , BIBREF17 or real estate online services as Zillow.com BIBREF18 . Other approaches have also relied on sources of socioeconomic information such as the UK Standard Occupation Classification (SOC) hierarchy, to assign socioeconomic status to users with occupation mentions BIBREF19 . Despite the relative success of these methods, their common limitation is to provide observations and predictions based on a carefully hand-picked small set of users, letting alone the problem of socioeconomic status inference on larger and more heterogeneous populations. Our work stands out from this well-established line of research by expanding the definition of socioeconomic status to include several demographic features as well as by pinpointing potential home location to individual users with an unprecedented accuracy. Identifying socioeconomic status and the network effects of homophily BIBREF20 is an open question BIBREF21 . However, recent results already showed that status homophily, i.e. the tendency of people of similar socioeconomic status are better connected among themselves, induce structural correlations which are pivotal to understand the stratified structure of society BIBREF22 . While we verify the presence of status homophily in the Twitter social network, we detect further sociolinguistic correlations between language, location, socioeconomic status, and time, which may inform novel methods to infer socioeconomic status for a broader set of people using common information available on Twitter."
],
[
"One of the main achievements of our study was the construction of a combined dataset for the analysis of sociolinguistic variables as a function of socioeconomic status, geographic location, time, and the social network. As follows, we introduce the two aforementioned independent datasets and how they were combined. We also present a brief cross-correlation analysis to ground the validity of our combined dataset for the rest of the study. In what follows, it should also be noted that regression analysis was performed via linear regression as implemented in the Scikit Learn Toolkit while data preprocessing and network study were performed using respectively pandas BIBREF23 and NetworkX BIBREF24 Python libraries."
],
[
"Our first dataset consists of a large data corpus collected from the online news and social networking service, Twitter. On it, users can post and interact with messages, \"tweets\", restricted to 140 characters. Tweets may come with several types of metadata including information about the author's profile, the detected language, where and when the tweet was posted, etc. Specifically, we recorded 170 million tweets written in French, posted by $2.5$ million users in the timezones GMT and GMT+1 over three years (between July 2014 to May 2017). These tweets were obtained via the Twitter powertrack API feeds provided by Datasift and Gnip with an access rate varying between $15-25\\%$ .",
"To obtain meaningful linguistic data we preprocessed the incoming tweet stream in several ways. As our central question here deals with the variability of the language, repeated tweets do not bring any additional information to our study. Therefore, as an initial filtering step, we decided to remove retweets. Next, in order to facilitate the detection of the selected linguistic markers we removed any URLs, emoticons, mentions of other users (denoted by the @ symbol) and hashtags (denoted by the # symbol) from each tweet. These expressions were not considered to be semantically meaningful and their filtering allowed to further increase the speed and accuracy of our linguistic detection methods when run across the data. In addition we completed a last step of textual preprocessing by down-casing and stripping the punctuation out of the tweets body. POS-taggers such as MElt BIBREF25 were also tested but they provided no significant improvement in the detection of the linguistic markers.",
"We used the collected tweets in another way to infer social relationships between users. Tweet messages may be direct interactions between users, who mention each other in the text by using the @ symbol (@username). When one user $u$ , mentions another user $v$ , user $v$ will see the tweet posted by user $u$ directly in his / her feed and may tweet back. In our work we took direct mentions as proxies of social interactions and used them to identify social ties between pairs of users. Opposite to the follower network, reflecting passive information exposure and less social involvement, the mutual mention network has been shown BIBREF26 to capture better the underlying social structure between users. We thus use this network definition in our work as links are a greater proxy for social interactions.",
"In our definition we assumed a tie between users if they mutually mentioned each other at least once during the observation period. People who reciprocally mentioned each other express some mutual interest, which may be a stronger reflection of real social relationships as compared to the non-mutual cases BIBREF27 . This constraint reduced the egocentric social network considerably leading to a directed structure of $508,975$ users and $4,029,862$ links that we considered being undirected in what follows.",
"About $2\\%$ of tweets included in our dataset contained some location information regarding either the tweet author's self-provided position or the place from which the tweet was posted. These pieces of information appeared as the combination of self reported locations or usual places tagged with GPS coordinates at different geographic resolution. We considered only tweets which contained the exact GPS coordinates with resolution of $\\sim 3$ meters of the location where the actual tweet was posted. This actually means that we excluded tweets where the user assigned a place name such as \"Paris\" or \"France\" to the location field, which are by default associated to the geographical center of the tagged areas. Practically, we discarded coordinates that appeared more than 500 times throughout the whole GPS-tagged data, assuming that there is no such $3\\times 3$ meter rectangle in the country where 500 users could appear and tweet by chance. After this selection procedure we rounded up each tweet location to a 100 meter precision.",
"To obtain a unique representative location of each user, we extracted the sequence of all declared locations from their geolocated tweets. Using this set of locations we selected the most frequent to be the representative one, and we took it as a proxy for the user's home location. Further we limited our users to ones located throughout the French territory thus not considering others tweeting from places outside the country. This selection method provided us with $110,369$ geolocated users who are either detected as French speakers or assigned to be such by Twitter and all associated to specific 'home' GPS coordinates in France. To verify the spatial distribution of the selected population, we further assessed the correlations between the true population distributions (obtained from census data BIBREF28 ) at different administrative level and the geolocated user distribution aggregated correspondingly. More precisely, we computed the $R^2$ coefficient of variation between the inferred and official population distributions (a) at the level of 22 regions. Correlations at this level induced a high coefficient of $R^2\\simeq 0.89$ ( $p<10^{-2}$ ); (b) At the arrondissement level with 322 administrative units and coefficient $R^2\\simeq 0.87$ ( $p<10^{-2}$ ); and (c) at the canton level with 4055 units with a coefficient $R\\simeq 0.16$ ( $p<10^{-2}$ ). Note that the relatively small coefficient at this level is due to the interplay of the sparsity of the inferred data and the fine grained spatial resolution of cantons. All in all, we can conclude that our sample is highly representative in terms of spatial population distribution, which at the same time validate our selection method despite the potential inherent biases induced by the method taking the most frequented GPS coordinates as the user's home location."
],
[
"The second dataset we used was released in December 2016 by the National Institute of Statistics and Economic Studies (INSEE) of France. This data corpus BIBREF29 contains a set of sociodemographic aggregated indicators, estimated from the 2010 tax return in France, for each 4 hectare ( $200m \\times 200m$ ) square patch across the whole French territory. Using these indicators, one can estimate the distribution of the average socioeconomic status (SES) of people with high spatial resolution. In this study, we concentrated on three indicators for each patch $i$ , which we took to be good proxies of the socioeconomic status of the people living within them. These were the $S^i_\\mathrm {inc}$ average yearly income per capita (in euros), the $S^i_{\\mathrm {own}}$ fraction of owners (not renters) of real estate, and the $S^i_\\mathrm {den}$ density of population defined respectively as ",
"$$:\nS^i_\\mathrm {inc}=\\frac{{S}^i_{hh}}{{N}^i_{hh}}, \\hspace{10.84006pt} S^i_\\mathrm {own}=\\frac{N^i_\\mathrm {own}}{N^i}, \\hspace{10.84006pt}\\mbox{and}\\hspace{10.84006pt} S^i_\\mathrm {den}=\\frac{N^i}{(200m)^2}.$$ (Eq. 13) ",
"Here ${S}^i_{hh}$ and ${N}^i_{hh}$ assign respectively the cumulative income and total number of inhabitants of patch $i$ , while $N^i_\\mathrm {own}$ and $N^i$ are respectively the number of real estate owners and the number of individuals living in patch $i$ . As an illustration we show the spatial distribution of $S^i_\\mathrm {inc}$ average income over the country in Fig. 1 a.",
"In order to uphold current privacy laws and due to the highly sensitive nature of the disclosed data, some statistical pretreatments were applied to the data by INSEE before its public release. More precisely, neighboring patches with less than 11 households were merged together, while some of the sociodemographic indicators were winsorized. This set of treatments induced an inherent bias responsible for the deviation of the distribution of some of the socioeconomic indicators. These quantities were expected to be determined by the Pareto principle, thus reflecting the high level of socioeconomic imbalances present within the population. Instead, as shown in Fig. 1 b [diagonal panels], distributions of the derived socioeconomic indicators (in blue) appeared somewhat more symmetric than expected. This doesn't hold though for $P(S^i_\\mathrm {den})$ (shown on a log-log scale in the lowest right panel of Fig. 1 b), which emerged with a broad tail similar to an expected power-law Pareto distribution. In addition, although the patches are relatively small ( $200m \\times 200m$ ), the socioeconomic status of people living may have some local variance, what we cannot consider here. Nevertheless, all things considered, this dataset and the derived socioeconomic indicators yield the most fine-grained description, allowed by national law, about the population of France over its whole territory.",
"Despite the inherent biases of the selected socioeconomic indicators, in general we found weak but significant pairwise correlations between these three variables as shown in the upper diagonal panels in Fig. 1 b (in red), with values in Table 1 . We observed that while $S_\\mathrm {inc}^{i}$ income and $S_\\mathrm {own}^{i}$ owner ratio are positively correlated ( $R=0.24$ , $p<10^{-2}$ ), and the $S_\\mathrm {own}^{i}$ and $S_\\mathrm {den}^{i}$ population density are negatively correlated ( $R=-0.23$ , $p<10^{-2}$ ), $S_\\mathrm {inc}^{i}$ and $S_\\mathrm {den}^{i}$ appeared to be very weakly correlated ( $S_\\mathrm {own}^{i}$0 , $S_\\mathrm {own}^{i}$1 ). This nevertheless suggested that high average income, high owner ratio, and low population density are consistently indicative of high socioeconomic status in the dataset. [subfigure]justification=justified,singlelinecheck=false"
],
[
"Data collected from Twitter provides a large variety of information about several users including their tweets, which disclose their interests, vocabulary, and linguistic patterns; their direct mentions from which their social interactions can be inferred; and the sequence of their locations, which can be used to infer their representative location. However, no information is directly available regarding their socioeconomic status, which can be pivotal to understand the dynamics and structure of their personal linguistic patterns.",
"To overcome this limitation we combined our Twitter data with the socioeconomic maps of INSEE by assigning each geolocated Twitter user to a patch closest to their estimated home location (within 1 km). This way we obtained for all $110,369$ geolocated users their dynamical linguistic data, their egocentric social network as well as a set of SES indicators.",
"Such a dataset associating language with socioeconomic status and social network throughout the French metropolitan territory is unique to our knowledge and provides unrivaled opportunities to verify sociolinguistic patterns observed over a long period on a small-scale, but never established in such a large population.",
"To verify whether the geolocated Twitter users yet provide a representative sample of the whole population we compared the distribution and correlations of the their SES indicators to the population measures. Results are shown in Fig. 1 b diagonal (red distributions) and lower diagonal panels (in blue) with correlation coefficients and $p$ -values summarized in Table. 1 . Even if we observed some discrepancy between the corresponding distributions and somewhat weaker correlations between the SES indicators, we found the same significant correlation trends (with the exception of the pair density / income) as the ones seen when studying the whole population, assuring us that each indicator correctly reflected the SES of individuals."
],
[
"We identified the following three linguistic markers to study across users from different socioeconomic backgrounds: Correlation with SES has been evidenced for all of them. The optional deletion of negation is typical of spoken French, whereas the omission of the mute letters marking the plural in the nominal phrase is a variable cue of French writing. The third linguistic variable is a global measure of the lexical diversity of the Twitter users. We present them here in greater detail."
],
[
"The basic form of negation in French includes two negative particles: ne (no) before the verb and another particle after the verb that conveys more accurate meaning: pas (not), jamais (never), personne (no one), rien (nothing), etc. Due to this double construction, the first part of the negation (ne) is optional in spoken French, but it is obligatory in standard writing. Sociolinguistic studies have previously observed the realization of ne in corpora of recorded everyday spoken interactions. Although all the studies do not converge, a general trend is that ne realization is more frequent in speakers with higher socioeconomic status than in speakers with lower status BIBREF30 , BIBREF31 . We built upon this research to set out to detect both negation variants in the tweets using regular expressions. We are namely interested in the rate of usage of the standard negation (featuring both negative particles) across users: ",
"$$L^u_{\\mathrm {cn}}=\\frac{n^u_{\\mathrm {cn}}}{n^u_{\\mathrm {cn}}+n^u_{\\mathrm {incn}}} \\hspace{14.45377pt} \\mbox{and} \\hspace{14.45377pt} \\overline{L}^{i}_{\\mathrm {cn}}=\\frac{\\sum _{u\\in i}L^u_{\\mathrm {cn}}}{N_i},$$ (Eq. 18) ",
"where $n^{u}_{\\mathrm {cn}}$ and $n^{u}_{\\mathrm {incn}}$ assign the number of correct negation and incorrect number of negation of user $u$ , thus $L_{\\mathrm {cn}}^u$ defines the rate of correct negation of a users and $\\overline{L}_{\\mathrm {cn}}^i$ its average over a selected $i$ group (like people living in a given place) of $N_i$ users."
],
[
"In written French, adjectives and nouns are marked as being plural by generally adding the letters s or x at the end of the word. Because these endings are mute (without counterpart in spoken French), their omission is the most frequent spelling error in adults BIBREF32 . Moreover, studies showed correlations between standard spelling and social status of the writers, in preteens, teens and adults BIBREF33 , BIBREF32 , BIBREF34 . We then set to estimate the use of standard plural across users: ",
"$$L^u_{\\mathrm {cp}}=\\frac{n^u_{\\mathrm {cp}}}{n^u_{\\mathrm {cp}}+n^u_{\\mathrm {incp}}} \\hspace{14.45377pt} \\mbox{and} \\hspace{14.45377pt} \\overline{L}^{i}_{\\mathrm {cp}}=\\frac{\\sum _{u\\in i}L^u_{\\mathrm {cp}}}{N_i}$$ (Eq. 20) ",
"where the notation follows as before ( $\\mathrm {cp}$ stands for correct plural and $\\mathrm {incp}$ stands for incorrect plural)."
],
[
"A positive relationship between an adult's lexical diversity level and his or her socioeconomic status has been evidenced in the field of language acquisition. Specifically, converging results showed that the growth of child lexicon depends on the lexical diversity in the speech of the caretakers, which in turn is related to their socioeconomic status and their educational level BIBREF35 , BIBREF36 . We thus proceeded to study the following metric: ",
"$$L^u_\\mathrm {vs}=\\frac{N^u_\\mathrm {vs}}{N^u_{tw}} \\hspace{14.45377pt} \\mbox{and} \\hspace{14.45377pt} \\overline{L}^{i}_\\mathrm {vs}=\\frac{\\sum _{u\\in i}N^u_\\mathrm {vs}}{N_i},$$ (Eq. 22) ",
"where $N_vs^u$ assigns the total number of unique words used by user $u$ who tweeted $N_{tw}^u$ times during the observation period. As such $L_\\mathrm {vs}^u$ gives the normalized vocabulary set size of a user $u$ , while $\\overline{L}_\\mathrm {vs}^i$ defines its average for a population $i$ ."
],
[
"By measuring the defined linguistic variables in the Twitter timeline of users we were finally set to address the core questions of our study, which dealt with linguistic variation. More precisely, we asked whether the language variants used online depend on the socioeconomic status of the users, on the location or time of usage, and on ones social network. To answer these questions we present here a multidimensional correlation study on a large set of Twitter geolocated users, to which we assigned a representative location, three SES indicators, and a set of meaningful social ties based on the collection of their tweets."
],
[
"The socioeconomic status of a person is arguably correlated with education level, income, habitual location, or even with ethnicity and political orientation and may strongly determine to some extent patterns of individual language usage. Such dependencies have been theoretically proposed before BIBREF11 , but have rarely been inspected at this scale yet. The use of our previously described datasets enabled us to do so via the measuring of correlations between the inferred SES indicators of Twitter users and the use of the previously described linguistic markers.",
"To compute and visualize these correlations we defined linear bins (in numbers varying from 20 to 50) for the socioeconomic indicators and computed the average of the given linguistic variables for people falling within the given bin. These binned values (shown as symbols in Fig. 2 ) were used to compute linear regression curves and the corresponding confidence intervals (see Fig. 2 ). An additional transformation was applied to the SES indicator describing population density, which was broadly distributed (as discussed in Section \"INSEE dataset: socioeconomic features\" and Fig. 1 b), thus, for the regression process, the logarithm of its values were considered. To quantify pairwise correlations we computed the $R^2$ coefficient of determination values in each case.",
"In Fig. 2 we show the correlation plots of all nine pairs of SES indicators and linguistic variables together with the linear regression curves, the corresponding $R^2$ values and the 95 percentile confidence intervals (note that all values are also in Table 2 ). These results show that correlations between socioeconomic indicators and linguistic variables actually exist. Furthermore, these correlation trends suggest that people with lower SES may use more non-standard expressions (higher rates of incorrect negation and plural forms) have a smaller vocabulary set size than people with higher SES. Note that, although the observed variation of linguistic variables were limited, all the correlations were statistically significant ( $p<10^{-2}$ ) with considerably high $R^2$ values ranging from $0.19$ (between $\\overline{L}_{\\mathrm {cn}}\\sim S_\\mathrm {inc}$ ) to $0.76$ (between $\\overline{L}_{\\mathrm {cp}}\\sim S_\\mathrm {den}$ ). For the rates of standard negation and plural terms the population density appeared to be the most determinant indicator with $R^2=0.74$ (and $0.76$ respectively), while for the vocabulary set size the average income provided the highest correlation (with $R^2=0.7$ ).",
"One must also acknowledge that while these correlations exhibit high values consistently across linguistic and socioeconomic indicators, they only hold meaning at the population level at which the binning was performed. When the data is considered at the user level, the variability of individual language usage hinders the observation of the aforementioned correlation values (as demonstrated by the raw scatter plots (grey symbols) in Fig. 2 )."
],
[
"Next we chose to focus on the spatial variation of linguistic variables. Although officially a standard language is used over the whole country, geographic variations of the former may exist due to several reasons BIBREF37 , BIBREF38 . For instance, regional variability resulting from remnants of local languages that have disappeared, uneven spatial distribution of socioeconomic potentials, or influence spreading from neighboring countries might play a part in this process. For the observation of such variability, by using their representative locations, we assigned each user to a department of France. We then computed the $\\overline{L}^{i}_{\\mathrm {cn}}$ (resp. $\\overline{L}^{i}_{\\mathrm {cp}}$ ) average rates of standard negation (resp. plural agreement) and the $\\overline{L}^{i}_\\mathrm {vs}$ average vocabulary set size for each \"département\" $i$ in the country (administrative division of France – There are 97 départements).",
"Results shown in Fig. 3 a-c revealed some surprising patterns, which appeared to be consistent for each linguistic variable. By considering latitudinal variability it appeared that, overall, people living in the northern part of the country used a less standard language, i.e., negated and pluralized less standardly, and used a smaller number of words. On the other hand, people from the South used a language which is somewhat closer to the standard (in terms of the aforementioned linguistic markers) and a more diverse vocabulary. The most notable exception is Paris, where in the city center people used more standard language, while the contrary is true for the suburbs. This observation, better shown in Fig. 3 a inset, can be explained by the large differences in average socioeconomic status between districts. Such segregation is known to divide the Eastern and Western sides of suburban Paris, and in turn to induce apparent geographic patterns of standard language usage. We found less evident longitudinal dependencies of the observed variables. Although each variable shows a somewhat diagonal trend, the most evident longitudinal dependency appeared for the average rate of standard pluralization (see Fig. 3 b), where users from the Eastern side of the country used the language in less standard ways. Note that we also performed a multivariate regression analysis (not shown here), using the linguistic markers as target and considering as factors both location (in terms of latitude and longitude) as and income as proxy of socioeconomic status. It showed that while location is a strong global determinant of language variability, socioeconomic variability may still be significant locally to determine standard language usage (just as we demonstrated in the case of Paris)."
],
[
"Another potentially important factor determining language variability is the time of day when users are active in Twitter BIBREF39 , BIBREF40 . The temporal variability of standard language usage can be measured for a dynamical quantity like the $L_{\\mathrm {cn}}(t)$ rate of correct negation. To observe its periodic variability (with a $\\Delta T$ period of one week) over an observation period of $T$ (in our case 734 days), we computed ",
"$$\\overline{L}^{\\Lambda }_{\\mathrm {cn}}(t)=\\frac{\\Delta T}{|\\Lambda |T}\\sum _{u\\in \\Lambda }\\sum _{k=0}^{\\left\\lfloor {T/\\Delta T}\\right\\rfloor }L_{\\mathrm {cn}}^{u}(t+k\\Delta T),$$ (Eq. 29) ",
"in a population $\\Lambda $ of size $|\\Lambda |$ with a time resolution of one hour. This quantity reflects the average standard negation rate in an hour over the week in the population $\\Lambda $ . Note that an equivalent $\\overline{L}^{\\Lambda }_{\\mathrm {cp}}(t)$ measure can be defined for the rate of standard plural terms, but not for the vocabulary set size as it is a static variable.",
"In Fig. 4 a and b we show the temporal variability of $\\overline{L}^{\\Lambda }_{\\mathrm {cn}}(t)$ and $\\overline{L}^{\\Lambda }_{\\mathrm {cp}}(t)$ (respectively) computed for the whole Twitter user set ( $\\Gamma =all$ , solid line) and for geolocated users ( $\\Gamma =geo$ , dashed lines). Not surprisingly, these two curves were strongly correlated as indicated by the high Pearson correlation coefficients summarized in the last column of Table 3 which, again, assured us that our geolocated sample of Twitter users was representative of the whole set of users. At the same time, the temporal variability of these curves suggested that people tweeting during the day used a more standard language than those users who are more active during the night. However, after measuring the average income of active users in a given hour over a week, we obtained an even more sophisticated picture. It turned out that people active during the day have higher average income (warmer colors in Fig. 4 ) than people active during the night (colder colors in Fig. 4 ). Thus the variability of standard language patterns was largely explained by the changing overall composition of active Twitter users during different times of day and the positive correlation between socioeconomic status and the usage of higher linguistic standards (that we have seen earlier). This explanation was supported by the high coefficients (summarized in Table 3 ), which were indicative of strong and significant correlations between the temporal variability of average linguistic variables and average income of the active population on Twitter."
],
[
"Finally we sought to understand the effect of the social network on the variability of linguistic patterns. People in a social structure can be connected due to several reasons. Link creation mechanisms like focal or cyclic closure BIBREF41 , BIBREF42 , or preferential attachment BIBREF43 together with the effects of homophily BIBREF44 are all potentially driving the creation of social ties and communities, and the emergence of community rich complex structure within social networks. In terms of homophily, one can identify several individual characteristics like age, gender, common interest or political opinion, etc., that might increase the likelihood of creating relationships between disconnected but similar people, who in turn influence each other and become even more similar. Status homophily between people of similar socioeconomic status has been shown to be important BIBREF22 in determining the creation of social ties and to explain the stratified structure of society. By using our combined datasets, we aim here to identify the effects of status homophily and to distinguish them from other homophilic correlations and the effects of social influence inducing similarities among already connected people.",
"To do so, first we took the geolocated Twitter users in France and partitioned them into nine socioeconomic classes using their inferred income $S_\\mathrm {inc}^u$ . Partitioning was done first by sorting users by their $S^u_\\mathrm {inc}$ income to calculate their $C(S^u_\\mathrm {inc})$ cumulative income distribution function. We defined socioeconomic classes by segmenting $C(S^u_\\mathrm {inc})$ such that the sum of income is the same for each classes (for an illustration of our method see Fig. 6 a in the Appendix). We constructed a social network by considering mutual mention links between these users (as introduced in Section \"Data Description\" ). Taking the assigned socioeconomic classes of connected individuals, we confirmed the effects of status homophily in the Twitter mention network by computing the connection matrix of socioeconomic groups normalized by the equivalent matrix of corresponding configuration model networks, which conserved all network properties except structural correlations (as explained in the Appendix). The diagonal component in Fig. 6 matrix indicated that users of similar socioeconomic classes were better connected, while people from classes far apart were less connected than one would expect by chance from the reference model with users connected randomly.",
"In order to measure linguistic similarities between a pair of users $u$ and $v$ , we simply computed the $|L^{u}_{*}-L^{v}_{*}|$ absolute difference of their corresponding individual linguistic variable $*\\in \\lbrace \\mathrm {cn},\\mathrm {cp},vs\\rbrace $ . This measure appeared with a minimum of 0 and associated smaller values to more similar pairs of users. To identify the effects of status homophily and the social network, we proceeded by computing the similarity distribution in four cases: for connected users from the same socioeconomic class; for disconnected randomly selected pairs of users from the same socioeconomic class; for connected users in the network; and randomly selected pairs of disconnected users in the network. Note that in each case the same number of user pairs were sampled from the network to obtain comparable averages. This number was naturally limited by the number of connected users in the smallest socioeconomic class, and were chosen to be $10,000$ in each cases. By comparing the distributions shown in Fig. 5 we concluded that (a) connected users (red and yellow bars) were the most similar in terms of any linguistic marker. This similarity was even greater when the considered tie was connecting people from the same socioeconomic group; (b) network effects can be quantified by comparing the most similar connected (red bar) and disconnected (light blue bar) users from the same socioeconomic group. Since the similarity between disconnected users here is purely induced by status homophily, the difference of these two bars indicates additional effects that cannot be explained solely by status homophily. These additional similarities may rather be induced by other factors such as social influence, the physical proximity of users within a geographical area or other homophilic effects that were not accounted for. (c) Randomly selected pairs of users were more dissimilar than connected ones as they dominated the distributions for larger absolute difference values. We therefore concluded that both the effects of network and status homophily mattered in terms of linguistic similarity between users of this social media platform."
],
[
"The overall goal of our study was to explore the dependencies of linguistic variables on the socioeconomic status, location, time varying activity, and social network of users. To do so we constructed a combined dataset from a large Twitter data corpus, including geotagged posts and proxy social interactions of millions of users, as well as a detailed socioeconomic map describing average socioeconomic indicators with a high spatial resolution in France. The combination of these datasets provided us with a large set of Twitter users all assigned to their Twitter timeline over three years, their location, three individual socioeconomic indicators, and a set of meaningful social ties. Three linguistic variables extracted from individual Twitter timelines were then studied as a function of the former, namely, the rate of standard negation, the rate of plural agreement and the size of vocabulary set.",
"Via a detailed multidimensional correlation study we concluded that (a) socioeconomic indicators and linguistic variables are significantly correlated. i.e. people with higher socioeconomic status are more prone to use more standard variants of language and a larger vocabulary set, while people on the other end of the socioeconomic spectrum tend to use more non-standard terms and, on average, a smaller vocabulary set; (b) Spatial position was also found to be a key feature of standard language use as, overall, people from the North tended to use more non-standard terms and a smaller vocabulary set compared to people from the South; a more fine-grained analysis reveals that the spatial variability of language is determined to a greater extent locally by the socioeconomic status; (c) In terms of temporal activity, standard language was more likely to be used during the daytime while non-standard variants were predominant during the night. We explained this temporal variability by the turnover of population with different socioeconomic status active during night and day; Finally (d) we showed that the social network and status homophily mattered in terms of linguistic similarity between peers, as connected users with the same socioeconomic status appeared to be the most similar, while disconnected people were found to be the most dissimilar in terms of their individual use of the aforementioned linguistic markers.",
"Despite these findings, one has to acknowledge the multiple limitations affecting this work: First of all, although Twitter is a broadly adopted service in most technologically enabled societies, it commonly provides a biased sample in terms of age and socioeconomic status as older or poorer people may not have access to this technology. In addition, home locations inferred for lower activity users may induced some noise in our inference method. Nevertheless, we demonstrated that our selected Twitter users are quite representative in terms of spatial, temporal, and socioeconomic distributions once compared to census data. Other sources of bias include the \"homogenization\" performed by INSEE to ensure privacy rights are upheld as well as the proxies we devised to approximate users' home location and social network. Currently, a sample survey of our set of geolocated users is being conducted so as to bootstrap socioeconomic data to users and definitely validate our inference results. Nonetheless, this INSEE dataset provides still the most comprehensive available information on socioeconomic status over the whole country. For limiting such risk of bias, we analyzed the potential effect of the confounding variables on distribution and cross-correlations of SES indicators. Acknowledging possible limitations of this study, we consider it as a necessary first step in analyzing income through social media using datasets orders of magnitude larger than in previous research efforts.",
"Finally we would like to emphasize two scientific merits of the paper. On one side, based on a very large sample, we confirm and clarify results from the field of sociolinguistics and we highlight new findings. We thus confirm clear correlations between the variable realization of the negative particle in French and three indices of socioeconomic status. This result challenges those among the sociolinguistic studies that do not find such correlation. Our data also suggested that the language used in the southern part of France is more standard. Understanding this pattern fosters further investigations within sociolinguistics. We finally established that the linguistic similarity of socially connected people is partially explained by status homophily but could be potentially induced by social influences passing through the network of links or other terms of homophilic correlations. Beyond scientific merit, we can identify various straightforward applications of our results. The precise inference of socioeconomic status of individuals from online activities is for instance still an open question, which carries a huge potential in marketing design and other areas. Our results may be useful moving forward in this direction by using linguistic information, available on Twitter and other online platforms, to infer socioeconomic status of individuals from their position in the network as well as the way they use their language."
],
[
"Status homophily in social networks appears as an increased tendency for people from similar socioeconomic classes to be connected. This correlation can be identified by comparing likelihood of connectedness in the empirical network to a random network, which conserves all network properties except structural correlations. To do so, we took each $(s_i,s_j)$ pair of the nine SES class in the Twitter network and counted the number of links $|E(s_i, s_j)|$ connecting people in classes $s_i$ and $s_j$ . As a reference system, we computed averages over 100 corresponding configuration model network structures BIBREF45 . To signalize the effects of status homophily, we took the ratio $|E(s_i, s_j)|/|E_{rand}(s_i, s_j)|$ of the two matrices (shown in Fig. 6 b). The diagonal component in Fig. 6 b with values larger than 1 showed that users of the same or similar socioeconomic class were better connected in the original structure than by chance, while the contrary was true for users from classes far apart (see blue off-diagonal components). To verify the statistical significance of this finding, we performed a $\\chi ^2$ -test, which showed that the distribution of links in the original matrix was significantly different from the one of the average randomized matrix ( $p<10^{-5}$ ). This observation verified status homophily present in the Twitter mention network."
]
],
"section_name": [
"Introduction",
"Related Work",
"Data Description",
"Twitter dataset: sociolinguistic features",
"INSEE dataset: socioeconomic features",
"Combined dataset: individual socioeconomic features",
"Linguistic variables",
"Standard usage of negation",
"Standard usage of plural ending of written words",
"Normalized vocabulary set size",
"Results",
"Socioeconomic variation",
"Spatial variation",
"Temporal variation",
"Network variation",
"Conclusions",
"Appendix: Status homophily"
]
} | {
"answers": [
{
"annotation_id": [
"82280a6c4499dd060fa55df5b195d0546a2c8aa0",
"c0474a3de31587005289eabeb3a5b32925a5ea3e"
],
"answer": [
{
"evidence": [
"To overcome this limitation we combined our Twitter data with the socioeconomic maps of INSEE by assigning each geolocated Twitter user to a patch closest to their estimated home location (within 1 km). This way we obtained for all $110,369$ geolocated users their dynamical linguistic data, their egocentric social network as well as a set of SES indicators."
],
"extractive_spans": [],
"free_form_answer": "Match geolocation data for Twitter users with patches from INSEE socioeconomic maps.",
"highlighted_evidence": [
"To overcome this limitation we combined our Twitter data with the socioeconomic maps of INSEE by assigning each geolocated Twitter user to a patch closest to their estimated home location (within 1 km). This way we obtained for all $110,369$ geolocated users their dynamical linguistic data, their egocentric social network as well as a set of SES indicators."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To obtain a unique representative location of each user, we extracted the sequence of all declared locations from their geolocated tweets. Using this set of locations we selected the most frequent to be the representative one, and we took it as a proxy for the user's home location. Further we limited our users to ones located throughout the French territory thus not considering others tweeting from places outside the country. This selection method provided us with $110,369$ geolocated users who are either detected as French speakers or assigned to be such by Twitter and all associated to specific 'home' GPS coordinates in France. To verify the spatial distribution of the selected population, we further assessed the correlations between the true population distributions (obtained from census data BIBREF28 ) at different administrative level and the geolocated user distribution aggregated correspondingly. More precisely, we computed the $R^2$ coefficient of variation between the inferred and official population distributions (a) at the level of 22 regions. Correlations at this level induced a high coefficient of $R^2\\simeq 0.89$ ( $p<10^{-2}$ ); (b) At the arrondissement level with 322 administrative units and coefficient $R^2\\simeq 0.87$ ( $p<10^{-2}$ ); and (c) at the canton level with 4055 units with a coefficient $R\\simeq 0.16$ ( $p<10^{-2}$ ). Note that the relatively small coefficient at this level is due to the interplay of the sparsity of the inferred data and the fine grained spatial resolution of cantons. All in all, we can conclude that our sample is highly representative in terms of spatial population distribution, which at the same time validate our selection method despite the potential inherent biases induced by the method taking the most frequented GPS coordinates as the user's home location.",
"The second dataset we used was released in December 2016 by the National Institute of Statistics and Economic Studies (INSEE) of France. This data corpus BIBREF29 contains a set of sociodemographic aggregated indicators, estimated from the 2010 tax return in France, for each 4 hectare ( $200m \\times 200m$ ) square patch across the whole French territory. Using these indicators, one can estimate the distribution of the average socioeconomic status (SES) of people with high spatial resolution. In this study, we concentrated on three indicators for each patch $i$ , which we took to be good proxies of the socioeconomic status of the people living within them. These were the $S^i_\\mathrm {inc}$ average yearly income per capita (in euros), the $S^i_{\\mathrm {own}}$ fraction of owners (not renters) of real estate, and the $S^i_\\mathrm {den}$ density of population defined respectively as"
],
"extractive_spans": [],
"free_form_answer": "By matching users to locations using geolocated tweets data, then matching locations to socioeconomic status using INSEE sociodemographic data.",
"highlighted_evidence": [
"To obtain a unique representative location of each user, we extracted the sequence of all declared locations from their geolocated tweets. Using this set of locations we selected the most frequent to be the representative one, and we took it as a proxy for the user's home location.",
"The second dataset we used was released in December 2016 by the National Institute of Statistics and Economic Studies (INSEE) of France. This data corpus BIBREF29 contains a set of sociodemographic aggregated indicators, estimated from the 2010 tax return in France, for each 4 hectare ( $200m \\times 200m$ ) square patch across the whole French territory."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86"
]
},
{
"annotation_id": [
"27584b8b6d0de6575f8292c79595a74f737e8170",
"4c93c6a7cdc64948201e9dc780468022f6a373bf"
],
"answer": [
{
"evidence": [
"In Fig. 4 a and b we show the temporal variability of $\\overline{L}^{\\Lambda }_{\\mathrm {cn}}(t)$ and $\\overline{L}^{\\Lambda }_{\\mathrm {cp}}(t)$ (respectively) computed for the whole Twitter user set ( $\\Gamma =all$ , solid line) and for geolocated users ( $\\Gamma =geo$ , dashed lines). Not surprisingly, these two curves were strongly correlated as indicated by the high Pearson correlation coefficients summarized in the last column of Table 3 which, again, assured us that our geolocated sample of Twitter users was representative of the whole set of users. At the same time, the temporal variability of these curves suggested that people tweeting during the day used a more standard language than those users who are more active during the night. However, after measuring the average income of active users in a given hour over a week, we obtained an even more sophisticated picture. It turned out that people active during the day have higher average income (warmer colors in Fig. 4 ) than people active during the night (colder colors in Fig. 4 ). Thus the variability of standard language patterns was largely explained by the changing overall composition of active Twitter users during different times of day and the positive correlation between socioeconomic status and the usage of higher linguistic standards (that we have seen earlier). This explanation was supported by the high coefficients (summarized in Table 3 ), which were indicative of strong and significant correlations between the temporal variability of average linguistic variables and average income of the active population on Twitter."
],
"extractive_spans": [],
"free_form_answer": "No, but the authors identified a correlation.",
"highlighted_evidence": [
"It turned out that people active during the day have higher average income (warmer colors in Fig. 4 ) than people active during the night (colder colors in Fig. 4 ). Thus the variability of standard language patterns was largely explained by the changing overall composition of active Twitter users during different times of day and the positive correlation between socioeconomic status and the usage of higher linguistic standards (that we have seen earlier). This explanation was supported by the high coefficients (summarized in Table 3 ), which were indicative of strong and significant correlations between the temporal variability of average linguistic variables and average income of the active population on Twitter."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To do so, first we took the geolocated Twitter users in France and partitioned them into nine socioeconomic classes using their inferred income $S_\\mathrm {inc}^u$ . Partitioning was done first by sorting users by their $S^u_\\mathrm {inc}$ income to calculate their $C(S^u_\\mathrm {inc})$ cumulative income distribution function. We defined socioeconomic classes by segmenting $C(S^u_\\mathrm {inc})$ such that the sum of income is the same for each classes (for an illustration of our method see Fig. 6 a in the Appendix). We constructed a social network by considering mutual mention links between these users (as introduced in Section \"Data Description\" ). Taking the assigned socioeconomic classes of connected individuals, we confirmed the effects of status homophily in the Twitter mention network by computing the connection matrix of socioeconomic groups normalized by the equivalent matrix of corresponding configuration model networks, which conserved all network properties except structural correlations (as explained in the Appendix). The diagonal component in Fig. 6 matrix indicated that users of similar socioeconomic classes were better connected, while people from classes far apart were less connected than one would expect by chance from the reference model with users connected randomly."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To do so, first we took the geolocated Twitter users in France and partitioned them into nine socioeconomic classes using their inferred income $S_\\mathrm {inc}^u$ . Partitioning was done first by sorting users by their $S^u_\\mathrm {inc}$ income to calculate their $C(S^u_\\mathrm {inc})$ cumulative income distribution function. We defined socioeconomic classes by segmenting $C(S^u_\\mathrm {inc})$ such that the sum of income is the same for each classes (for an illustration of our method see Fig. 6 a in the Appendix)."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"197290cb509b9a046b311719c6ce1ce408f3be8a"
]
},
{
"annotation_id": [
"05757ba43c646a83ad6a85fec61da6945095c14f",
"1047c532aca7c4c2db111c12553ba47ef25256d0"
],
"answer": [
{
"evidence": [
"The basic form of negation in French includes two negative particles: ne (no) before the verb and another particle after the verb that conveys more accurate meaning: pas (not), jamais (never), personne (no one), rien (nothing), etc. Due to this double construction, the first part of the negation (ne) is optional in spoken French, but it is obligatory in standard writing. Sociolinguistic studies have previously observed the realization of ne in corpora of recorded everyday spoken interactions. Although all the studies do not converge, a general trend is that ne realization is more frequent in speakers with higher socioeconomic status than in speakers with lower status BIBREF30 , BIBREF31 . We built upon this research to set out to detect both negation variants in the tweets using regular expressions. We are namely interested in the rate of usage of the standard negation (featuring both negative particles) across users:",
"In written French, adjectives and nouns are marked as being plural by generally adding the letters s or x at the end of the word. Because these endings are mute (without counterpart in spoken French), their omission is the most frequent spelling error in adults BIBREF32 . Moreover, studies showed correlations between standard spelling and social status of the writers, in preteens, teens and adults BIBREF33 , BIBREF32 , BIBREF34 . We then set to estimate the use of standard plural across users:"
],
"extractive_spans": [],
"free_form_answer": "Use of both French negative particles and spelling out plural ending on adjectives and nouns",
"highlighted_evidence": [
"The basic form of negation in French includes two negative particles: ne (no) before the verb and another particle after the verb that conveys more accurate meaning: pas (not), jamais (never), personne (no one), rien (nothing), etc. Due to this double construction, the first part of the negation (ne) is optional in spoken French, but it is obligatory in standard writing.",
"In written French, adjectives and nouns are marked as being plural by generally adding the letters s or x at the end of the word."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We identified the following three linguistic markers to study across users from different socioeconomic backgrounds: Correlation with SES has been evidenced for all of them. The optional deletion of negation is typical of spoken French, whereas the omission of the mute letters marking the plural in the nominal phrase is a variable cue of French writing. The third linguistic variable is a global measure of the lexical diversity of the Twitter users. We present them here in greater detail."
],
"extractive_spans": [
"Standard usage of negation",
"Standard usage of plural ending of written words",
"lexical diversity"
],
"free_form_answer": "",
"highlighted_evidence": [
"We identified the following three linguistic markers to study across users from different socioeconomic backgrounds: Correlation with SES has been evidenced for all of them. The optional deletion of negation is typical of spoken French, whereas the omission of the mute letters marking the plural in the nominal phrase is a variable cue of French writing. The third linguistic variable is a global measure of the lexical diversity of the Twitter users."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86"
]
},
{
"annotation_id": [
"27b2811a3ba7e8c4a9c2a499e79effa02fc7dada",
"dbd2e3de112d7f5769f01bde1216cbfba9681fad"
],
"answer": [
{
"evidence": [
"To obtain a unique representative location of each user, we extracted the sequence of all declared locations from their geolocated tweets. Using this set of locations we selected the most frequent to be the representative one, and we took it as a proxy for the user's home location. Further we limited our users to ones located throughout the French territory thus not considering others tweeting from places outside the country. This selection method provided us with $110,369$ geolocated users who are either detected as French speakers or assigned to be such by Twitter and all associated to specific 'home' GPS coordinates in France. To verify the spatial distribution of the selected population, we further assessed the correlations between the true population distributions (obtained from census data BIBREF28 ) at different administrative level and the geolocated user distribution aggregated correspondingly. More precisely, we computed the $R^2$ coefficient of variation between the inferred and official population distributions (a) at the level of 22 regions. Correlations at this level induced a high coefficient of $R^2\\simeq 0.89$ ( $p<10^{-2}$ ); (b) At the arrondissement level with 322 administrative units and coefficient $R^2\\simeq 0.87$ ( $p<10^{-2}$ ); and (c) at the canton level with 4055 units with a coefficient $R\\simeq 0.16$ ( $p<10^{-2}$ ). Note that the relatively small coefficient at this level is due to the interplay of the sparsity of the inferred data and the fine grained spatial resolution of cantons. All in all, we can conclude that our sample is highly representative in terms of spatial population distribution, which at the same time validate our selection method despite the potential inherent biases induced by the method taking the most frequented GPS coordinates as the user's home location.",
"The second dataset we used was released in December 2016 by the National Institute of Statistics and Economic Studies (INSEE) of France. This data corpus BIBREF29 contains a set of sociodemographic aggregated indicators, estimated from the 2010 tax return in France, for each 4 hectare ( $200m \\times 200m$ ) square patch across the whole French territory. Using these indicators, one can estimate the distribution of the average socioeconomic status (SES) of people with high spatial resolution. In this study, we concentrated on three indicators for each patch $i$ , which we took to be good proxies of the socioeconomic status of the people living within them. These were the $S^i_\\mathrm {inc}$ average yearly income per capita (in euros), the $S^i_{\\mathrm {own}}$ fraction of owners (not renters) of real estate, and the $S^i_\\mathrm {den}$ density of population defined respectively as",
"To overcome this limitation we combined our Twitter data with the socioeconomic maps of INSEE by assigning each geolocated Twitter user to a patch closest to their estimated home location (within 1 km). This way we obtained for all $110,369$ geolocated users their dynamical linguistic data, their egocentric social network as well as a set of SES indicators."
],
"extractive_spans": [
"we combined our Twitter data with the socioeconomic maps of INSEE by assigning each geolocated Twitter user to a patch closest to their estimated home location"
],
"free_form_answer": "",
"highlighted_evidence": [
"To obtain a unique representative location of each user, we extracted the sequence of all declared locations from their geolocated tweets. Using this set of locations we selected the most frequent to be the representative one, and we took it as a proxy for the user's home location.",
"The second dataset we used was released in December 2016 by the National Institute of Statistics and Economic Studies (INSEE) of France. This data corpus BIBREF29 contains a set of sociodemographic aggregated indicators, estimated from the 2010 tax return in France, for each 4 hectare ( $200m \\times 200m$ ) square patch across the whole French territory.",
"To overcome this limitation we combined our Twitter data with the socioeconomic maps of INSEE by assigning each geolocated Twitter user to a patch closest to their estimated home location (within 1 km). This way we obtained for all $110,369$ geolocated users their dynamical linguistic data, their egocentric social network as well as a set of SES indicators."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"197290cb509b9a046b311719c6ce1ce408f3be8a",
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"yes",
"yes",
"yes",
"yes"
],
"question": [
"How do they combine the socioeconomic maps with Twitter data? ",
"Does the fact that people are active during the day time define their SEC?",
"How did they define standard language?",
"How do they operationalize socioeconomic status from twitter user data?"
],
"question_id": [
"d4b84f48460517bc0a6d4e0c38f6853c58081166",
"90756bdcd812b7ecc1c5df2298aa7561fd2eb02c",
"028d0d9b7a71133e51a14a32cd09dea1e2f39f05",
"cfc73e0c82cf1630b923681c450a541a964688b9"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"sociolinguistics ",
"sociolinguistics ",
"sociolinguistics ",
"sociolinguistics "
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Distributions and correlations of socioeconomic indicators. (a) Spatial distribution of average income in France with 200m × 200m resolution. (b) Distribution of socioeconomic indicators (in the diag.) and their pairwise correlations measured in the INSEE (upper diag. panels) and Twitter geotagged (lower diag. panels) datasets. Contour plots assign the equidensity lines of the scatter plots, while solid lines are the corresponding linear regression values. Population density in log.",
"Table 1: Pearson correlations and p-values measured between SES indicators in the INSEE and Twitter datasets.",
"Figure 2: Pairwise correlations between three SES indicators and three linguisticmarkers. Columns correspond to SES indicators (resp. Siinc, S i own, Siden), while rows correspond to linguistic variables (resp. Lcn, Lcp and Lvs). On each plot colored symbols are binned data values and a linear regression curve are shown together with the 95 percentile confidence interval and R2 values.",
"Table 2: The R2 coefficient of determination and the corresponding p-values computed for the pairwise correlations of SES indicators and linguistic variables.",
"Figure 4: Temporal variability of (a) LΛcn(t) (resp. (b) L Λ cp(t)) average rate of correct negation (resp. plural terms) over a week with one hour resolution. Rates were computed for Λ = all (solid line) and Λ = дeolocated Twitter users. Colors indicates the temporal variability of the average income of geolocated population active in a given hour.",
"Figure 3: Geographical variability of linguistic markers in France. (a) Variability of the rate of correct negation. Inset focuses on larger Paris. (b) Variability of the rate of correct plural terms. (c) Variability of the average vocabulary size set. Each plot depicts variability on the department level except the inset of (a) which is on the \"arrondissements\" level.",
"Figure 5: Distribution of the |Lu∗ − Lv∗ | absolute difference of linguistic variables ∗ ∈ {cn, cp,vs} (resp. panels (a), (b), and (c)) of user pairs who were connected and from the same socioeconomic group (red), connected (yellow), disconnected and from the same socioeconomic group (light blue), disconnected pairs of randomly selected users (blue).",
"Table 3: Pearson correlations and p-values of pairwise correlations of time varying Sinc(t) average income with L Λ cn(t) and LΛcp(t) average linguistic variables; and between average linguistic variables of Λ = all and Λ = geo-localized users.",
"Figure 6: (a) Definition of socioeconomic classes by partitioning users into nine groups with the same cumulative annual income. (b) Structural correlations between SES groups depicted as matrix of the ratio |E(si , sj )|/|Erand (si , sj )| between the original and the average randomized mention network"
],
"file": [
"4-Figure1-1.png",
"5-Table1-1.png",
"6-Figure2-1.png",
"6-Table2-1.png",
"7-Figure4-1.png",
"7-Figure3-1.png",
"8-Figure5-1.png",
"8-Table3-1.png",
"9-Figure6-1.png"
]
} | [
"How do they combine the socioeconomic maps with Twitter data? ",
"Does the fact that people are active during the day time define their SEC?",
"How did they define standard language?"
] | [
[
"1804.01155-Combined dataset: individual socioeconomic features-1",
"1804.01155-Twitter dataset: sociolinguistic features-5"
],
[
"1804.01155-Temporal variation-3",
"1804.01155-Network variation-1"
],
[
"1804.01155-Linguistic variables-0"
]
] | [
"By matching users to locations using geolocated tweets data, then matching locations to socioeconomic status using INSEE sociodemographic data.",
"No, but the authors identified a correlation.",
"Use of both French negative particles and spelling out plural ending on adjectives and nouns"
] | 246 |
1805.06648 | Extrapolation in NLP | We argue that extrapolation to examples outside the training space will often be easier for models that capture global structures, rather than just maximise their local fit to the training data. We show that this is true for two popular models: the Decomposable Attention Model and word2vec. | {
"paragraphs": [
[
"In a controversial essay, BIBREF0 draws the distinction between two types of generalisation: interpolation and extrapolation; with the former being predictions made between the training data points, and the latter being generalisation outside this space. He goes on to claim that deep learning is only effective at interpolation, but that human like learning and behaviour requires extrapolation.",
"On Twitter, Thomas Diettrich rebutted this claim with the response that no methods extrapolate; that what appears to be extrapolation from X to Y is interpolation in a representation that makes X and Y look the same. ",
"It is certainly true that extrapolation is hard, but there appear to be clear real-world examples. For example, in 1705, using Newton's then new inverse square law of gravity, Halley predicted the return of a comet 75 years in the future. This prediction was not only possible for a new celestial object for which only a limited amount of data was available, but was also effective on an orbital period twice as long as any of those known to Newton. Pre-Newtonian models required a set of parameters (deferents, epicycles, equants, etc.) for each body and so would struggle to generalise from known objects to new ones. Newton's theory of gravity, in contrast, not only described celestial orbits but also predicted the motion of bodies thrown or dropped on Earth.",
"In fact, most scientists would regard this sort of extrapolation to new phenomena as a vital test of any theory's legitimacy. Thus, the question of what is required for extrapolation is reasonably important for the development of NLP and deep learning.",
" BIBREF0 proposes an experiment, consisting of learning the identity function for binary numbers, where the training set contains only the even integers but at test time the model is required to generalise to odd numbers. A standard multilayer perceptron (MLP) applied to this data fails to learn anything about the least significant bit in input and output, as it is constant throughout the training set, and therefore fails to generalise to the test set. Many readers of the article ridiculed the task and questioned its relevance. Here, we will argue that it is surprisingly easy to solve Marcus' even-odd task and that the problem it illustrates is actually endemic throughout machine learning.",
" BIBREF0 links his experiment to the systematic ways in which the meaning and use of a word in one context is related to its meaning and use in another BIBREF1 , BIBREF2 . These regularities allow us to extrapolate from sometimes even a single use of a word to understand all of its other uses.",
"In fact, we can often use a symbol effectively with no prior data. For example, a language user that has never have encountered the symbol Socrates before may nonetheless be able to leverage their syntactic, semantic and inferential skills to conclude that Socrates is mortal contradicts Socrates is not mortal.",
"Marcus' experiment essentially requires extrapolating what has been learned about one set of symbols to a new symbol in a systematic way. However, this transfer is not facilitated by the techniques usually associated with improving generalisation, such as L2-regularisation BIBREF3 , drop-out BIBREF4 or preferring flatter optima BIBREF5 .",
"In the next section, we present four ways to solve this problem and discuss the role of global symmetry in effective extrapolation to the unseen digit. Following that we present practical examples of global structure in the representation of sentences and words. Global, in these examples, means a model form that introduces dependencies between distant regions of the input space."
],
[
"The problem is described concretely by BIBREF6 , with inputs and outputs both consisting of five units representing the binary digits of the integers zero to thirty one. The training data consists of the binary digits of the even numbers INLINEFORM0 and the test set consists of the odd numbers INLINEFORM1 . The task is to learn the identity function from the training data in a way that generalises to the test set.",
"The first model (slp) we consider is a simple linear single layer perceptron from input to output.",
"In the second model (flip), we employ a change of representation. Although the inputs and outputs are given and fixed in terms of the binary digits 1 and 0, we will treat these as symbols and exploit the freedom to encode these into numeric values in the most effective way for the task. Specifically, we will represent the digit 1 with the number 0 and the digit 0 with the number 1. Again, the network will be a linear single layer perceptron without biases.",
"Returning to the original common-sense representation, 1 INLINEFORM0 1 and 0 INLINEFORM1 0, the third model (ortho) attempts to improve generalisation by imposing a global condition on the matrix of weights in the linear weights. In particular, we require that the matrix is orthogonal, and apply the absolute value function at the output to ensure the outputs are not negative.",
"For the fourth model (conv), we use a linear Convolutional Neural Network (ConvNet, BIBREF7 ) with a filter of width five. In other words, the network weights define a single linear function that is shifted across the inputs for each output position.",
"Finally, in our fifth model (proj) we employ another change of representation, this time a dimensionality reduction technique. Specifically, we project the 5-dimensional binary digits INLINEFORM0 onto an INLINEFORM1 dimensional vector INLINEFORM2 and carry out the learning using an INLINEFORM3 -to- INLINEFORM4 layer in this smaller space. DISPLAYFORM0 ",
"where the entries of the matrix INLINEFORM0 are INLINEFORM1 . In each case, our loss and test evaluation is based on squared error between target and predicted outputs."
],
[
"The Stanford Natural Language Inference (SNLI, BIBREF10 ) dataset attempts to provide training and evaluation data for the task of categorising the logical relationship between a pair of sentences. Systems must identify whether each hypothesis stands in a relation of entailment, contradiction or neutral to its corresponding premise. A number of neural net architectures have been proposed that effectively learn to make test set predictions based purely on patterns learned from the training data, without additional knowledge of the real world or of the logical structure of the task.",
"Here, we evaluate the Decomposable Attention Model (DAM, BIBREF11 ) in terms of its ability to extrapolate to novel instances, consisting of contradictions from the original test set which have been reversed. For a human that understands the task, such generalisation is obvious: knowing that A contradicts B is equivalent to knowing that B contradicts A. However, it is not at all clear that a model will learn this symmetry from the SNLI data, without it being imposed on the model in some way. Consequently we also evaluate a modification, S-DAM, where this constraint is enforced by design."
],
[
"Word embeddings, such as GloVe BIBREF12 and word2vec BIBREF13 , have been enormously effective as input representations for downstream tasks such as question answering or natural language inference. One well known application is the INLINEFORM0 example, which represents an impressive extrapolation from word co-occurrence statistics to linguistic analogies BIBREF14 . To some extent, we can see this prediction as exploiting a global structure in which the differences between analogical pairs, such as INLINEFORM1 , INLINEFORM2 and INLINEFORM3 , are approximately equal.",
"Here, we consider how this global structure in the learned embeddings is related to a linearity in the training objective. In particular, linear functions have the property that INLINEFORM0 , imposing a systematic relation between the predictions we make for INLINEFORM1 , INLINEFORM2 and INLINEFORM3 . In fact, we could think of this as a form of translational symmetry where adding INLINEFORM4 to the input has the same effect on the output throughout the space.",
"We hypothesise that breaking this linearity, and allowing a more local fit to the training data will undermine the global structure that the analogy predictions exploit."
],
[
"Language is a very complex phenomenon, and many of its quirks and idioms need to be treated as local phenomena. However, we have also shown here examples in the representation of words and sentences where global structure supports extrapolation outside the training data.",
"One tool for thinking about this dichotomy is the equivalent kernel BIBREF15 , which measures the extent to which a given prediction is influenced by nearby training examples. Typically, models with highly local equivalent kernels - e.g. splines, sigmoids and random forests - are preferred over non-local models - e.g. polynomials - in the context of general curve fitting BIBREF16 .",
"However, these latter functions are also typically those used to express fundamental scientific laws - e.g. INLINEFORM0 , INLINEFORM1 - which frequently support extrapolation outside the original data from which they were derived. Local models, by their very nature, are less suited to making predictions outside the training manifold, as the influence of those training instances attenuates quickly.",
"We suggest that NLP will benefit from incorporating more global structure into its models. Existing background knowledge is one possible source for such additional structure BIBREF17 , BIBREF18 . But it will also be necessary to uncover novel global relations, following the example of the other natural sciences.",
"We have used the development of the scientific understanding of planetary motion as a repeated example of the possibility of uncovering global structures that support extrapolation, throughout our discussion. Kepler and Newton found laws that went beyond simply maximising the fit to the known set of planetary bodies to describe regularities that held for every body, terrestrial and heavenly.",
"In our SNLI example, we showed that simply maximising the fit on the development and test sets does not yield a model that extrapolates to reversed contradictions. In the case of word2vec, we showed that performance on the analogy task was related to the linearity in the objective function.",
"More generally, we want to draw attention to the need for models in NLP that make meaningful predictions outside the space of the training data, and to argue that such extrapolation requires distinct modelling techniques from interpolation within the training space. Specifically, whereas the latter can often effectively rely on local smoothing between training instances, the former may require models that exploit global structures of the language phenomena."
],
[
"The authors are immensely grateful to Ivan Sanchez Carmona for many fruitful disagreements. This work has been supported by the European Union H2020 project SUMMA (grant No. 688139), and by an Allen Distinguished Investigator Award."
]
],
"section_name": [
"Introduction",
"Four Ways to Learn the Identity Function",
"Global Symmetries in Natural Language Inference",
"Global Structure in Word Embeddings",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"059a0fe9d117c4437b9d2ea4ce095f5d185236ea",
"70024fe7fd4bbf2c2d608ba7295f34e8b8d0fcac"
],
"answer": [
{
"evidence": [
"We hypothesise that breaking this linearity, and allowing a more local fit to the training data will undermine the global structure that the analogy predictions exploit."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Models sections) 100, 200 and 400",
"highlighted_evidence": [
"We hypothesise that breaking this linearity, and allowing a more local fit to the training data will undermine the global structure that the analogy predictions exploit."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Accuracy on the analogy task."
],
"extractive_spans": [],
"free_form_answer": "100, 200, 400",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Accuracy on the analogy task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"18fcc7885b7d829288b552ea06182e63db2a18d9",
"d82500a1d46e06d094da243c4cb74b2cf5c6d3f6"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Here, we consider how this global structure in the learned embeddings is related to a linearity in the training objective. In particular, linear functions have the property that INLINEFORM0 , imposing a systematic relation between the predictions we make for INLINEFORM1 , INLINEFORM2 and INLINEFORM3 . In fact, we could think of this as a form of translational symmetry where adding INLINEFORM4 to the input has the same effect on the output throughout the space."
],
"extractive_spans": [
"global structure in the learned embeddings is related to a linearity in the training objective"
],
"free_form_answer": "",
"highlighted_evidence": [
"Here, we consider how this global structure in the learned embeddings is related to a linearity in the training objective. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What dimensions do the considered embeddings have?",
"How are global structures considered?"
],
"question_id": [
"143409d16125790c8db9ed38590a0796e0b2b2e2",
"8ba582939823faae6822a27448ea011ab6b90ed7"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Generalising to unseen data: dotted line = training manifold; black arrows = interpolation; grey arrows = extrapolation. Both directions are represented globally in the training data, but local interpolation is only effective in one of them at each point.",
"Table 1: Mean Squared Error on the Train (even numbers) and Test (odd numbers) Sets.",
"Table 2: Accuracy on all instances, contradictions and reversed contradictions from the SNLI test set.",
"Table 3: Accuracy on the analogy task."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png"
]
} | [
"What dimensions do the considered embeddings have?"
] | [
[
"1805.06648-5-Table3-1.png",
"1805.06648-Global Structure in Word Embeddings-2"
]
] | [
"100, 200, 400"
] | 248 |
1910.01160 | Identifying Nuances in Fake News vs. Satire: Using Semantic and Linguistic Cues | The blurry line between nefarious fake news and protected-speech satire has been a notorious struggle for social media platforms. Further to the efforts of reducing exposure to misinformation on social media, purveyors of fake news have begun to masquerade as satire sites to avoid being demoted. In this work, we address the challenge of automatically classifying fake news versus satire. Previous work have studied whether fake news and satire can be distinguished based on language differences. Contrary to fake news, satire stories are usually humorous and carry some political or social message. We hypothesize that these nuances could be identified using semantic and linguistic cues. Consequently, we train a machine learning method using semantic representation, with a state-of-the-art contextual language model, and with linguistic features based on textual coherence metrics. Empirical evaluation attests to the merits of our approach compared to the language-based baseline and sheds light on the nuances between fake news and satire. As avenues for future work, we consider studying additional linguistic features related to the humor aspect, and enriching the data with current news events, to help identify a political or social message. | {
"paragraphs": [
[
"The efforts by social media platforms to reduce the exposure of users to misinformation have resulted, on several occasions, in flagging legitimate satire stories. To avoid penalizing publishers of satire, which is a protected form of speech, the platforms have begun to add more nuance to their flagging systems. Facebook, for instance, added an option to mark content items as “Satire”, if “the content is posted by a page or domain that is a known satire publication, or a reasonable person would understand the content to be irony or humor with a social message” BIBREF0. This notion of humor and social message is also echoed in the definition of satire by Oxford dictionary as “the use of humour, irony, exaggeration, or ridicule to expose and criticize people's stupidity or vices, particularly in the context of contemporary politics and other topical issues”.",
"The distinction between fake news and satire carries implications with regard to the exposure of content on social media platforms. While fake news stories are algorithmically suppressed in the news feed, the satire label does not decrease the reach of such posts. This also has an effect on the experience of users and publishers. For users, incorrectly classifying satire as fake news may deprive them from desirable entertainment content, while identifying a fake news story as legitimate satire may expose them to misinformation. For publishers, the distribution of a story has an impact on their ability to monetize content.",
"Moreover, in response to these efforts to demote misinformation, fake news purveyors have begun to masquerade as legitimate satire sites, for instance, carrying small badges at the footer of each page denoting the content as satire BIBREF1. The disclaimers are usually small such that the stories are still being spread as though they were real news BIBREF2.",
"This gives rise to the challenge of classifying fake news versus satire based on the content of a story. While previous work BIBREF1 have shown that satire and fake news can be distinguished with a word-based classification approach, our work is focused on the semantic and linguistic properties of the content. Inspired by the distinctive aspects of satire with regard to humor and social message, our hypothesis is that using semantic and linguistic cues can help to capture these nuances.",
"Our main research questions are therefore, RQ1) are there semantic and linguistic differences between fake news and satire stories that can help to tell them apart?; and RQ2) can these semantic and linguistic differences contribute to the understanding of nuances between fake news and satire beyond differences in the language being used?",
"The rest of paper is organized as follows: in section SECREF2, we briefly review studies on fake news and satire articles which are the most relevant to our work. In section SECREF3, we present the methods we use to investigate semantic and linguistic differences between fake and satire articles. Next, we evaluate these methods and share insights on nuances between fake news and satire in section SECREF4. Finally, we conclude the paper in section SECREF5 and outline next steps and future work."
],
[
"Previous work addressed the challenge of identifying fake news BIBREF3, BIBREF4, or identifying satire BIBREF5, BIBREF6, BIBREF7, in isolation, compared to real news stories.",
"The most relevant work to ours is that of Golbeck et al. BIBREF1. They introduced a dataset of fake news and satirical articles, which we also employ in this work. The dataset includes the full text of 283 fake news stories and 203 satirical stories, that were verified manually, and such that each fake news article is paired with a rebutting article from a reliable source. Albeit relatively small, this data carries two desirable properties. First, the labeling is based on the content and not the source, and the stories spread across a diverse set of sources. Second, both fake news and satire articles focus on American politics and were posted between January 2016 and October 2017, minimizing the possibility that the topic of the article will influence the classification.",
"In their work, Golbeck et al. studied whether there are differences in the language of fake news and satirical articles on the same topic that could be utilized with a word-based classification approach. A model using the Naive Bayes Multinomial algorithm is proposed in their paper which serves as the baseline in our experiments."
],
[
"In the following subsections, we investigate the semantic and linguistic differences of satire and fake news articles."
],
[
"To study the semantic nuances between fake news and satire, we use BERT BIBREF8, which stands for Bidirectional Encoder Representations from Transformers, and represents a state-of-the-art contextual language model. BERT is a method for pre-training language representations, meaning that it is pre-trained on a large text corpus and then used for downstream NLP tasks. Word2Vec BIBREF9 showed that we can use vectors to properly represent words in a way that captures semantic or meaning-related relationships. While Word2Vec is a context-free model that generates a single word-embedding for each word in the vocabulary, BERT generates a representation of each word that is based on the other words in the sentence. It was built upon recent work in pre-training contextual representations, such as ELMo BIBREF10 and ULMFit BIBREF11, and is deeply bidirectional, representing each word using both its left and right context. We use the pre-trained models of BERT and fine-tune it on the dataset of fake news and satire articles using Adam optimizer with 3 types of decay and 0.01 decay rate. Our BERT-based binary classifier is created by adding a single new layer in BERT's neural network architecture that will be trained to fine-tune BERT to our task of classifying fake news and satire articles.",
""
],
[
"Inspired by previous work on satire detection, and specifically Rubin et al. BIBREF7 who studied the humor and absurdity aspects of satire by comparing the final sentence of a story to the first one, and to the rest of the story - we hypothesize that metrics of text coherence will be useful to capture similar aspects of semantic relatedness between different sentences of a story.",
"Consequently, we use the set of text coherence metrics as implemented by Coh-Metrix BIBREF12. Coh-Metrix is a tool for producing linguistic and discourse representations of a text. As a result of applying the Coh-Metrix to the input documents, we have 108 indices related to text statistics, such as the number of words and sentences; referential cohesion, which refers to overlap in content words between sentences; various text readability formulas; different types of connective words and more. To account for multicollinearity among the different features, we first run a Principal Component Analysis (PCA) on the set of Coh-Metrix indices. Note that we do not apply dimensionality reduction, such that the features still correspond to the Coh-Metrix indices and are thus explainable. Then, we use the PCA scores as independent variables in a logistic regression model with the fake and satire labels as our dependent variable. Significant features of the logistic regression model are shown in Table TABREF3 with the respective significance levels. We also run a step-wise backward elimination regression. Those components that are also significant in the step-wise model appear in bold."
],
[
"In the following sub sections, we evaluate our classification model and share insights on the nuances between fake news and satire, while addressing our two research questions."
],
[
"We evaluate the performance of our method based on the dataset of fake news and satire articles and using the F1 score with a ten-fold cross-validation as in the baseline work BIBREF1.",
"First, we consider the semantic representation with BERT. Our experiments included multiple pre-trained models of BERT with different sizes and cases sensitivity, among which the large uncased model, bert_uncased_L-24_H-1024_A-16, gave the best results. We use the recommended settings of hyper-parameters in BERT's Github repository and use the fake news and satire data to fine-tune the model. Furthermore, we tested separate models based on the headline and body text of a story, and in combination. Results are shown in Table TABREF6. The models based on the headline and text body give a similar F1 score. However, while the headline model performs poorly on precision, perhaps due to the short text, the model based on the text body performs poorly on recall. The model based on the full text of headline and body gives the best performance.",
"To investigate the predictive power of the linguistic cues, we use those Coh-Metrix indices that were significant in both the logistic and step-wise backward elimination regression models, and train a classifier on fake news and satire articles. We tested a few classification models, including Naive Bayes, Support Vector Machine (SVM), logistic regression, and gradient boosting - among which the SVM classifier gave the best results.",
"Table TABREF7 provides a summary of the results. We compare the results of our methods of the pre-trained BERT, using both the headline and text body, and the Coh-Mertix approach, to the language-based baseline with Multinomial Naive Bayes from BIBREF1. Both the semantic cues with BERT and the linguistic cues with Coh-Metrix significantly outperform the baseline on the F1 score. The two-tailed paired t-test with a 0.05 significance level was used for testing statistical significance of performance differences. The best result is given by the BERT model. Overall, these results provide an answer to research question RQ1 regarding the existence of semantic and linguistic difference between fake news and satire."
],
[
"With regard to research question RQ2 on the understanding of semantic and linguistic nuances between fake news and satire - a key advantage of studying the coherence metrics is explainability. While the pre-trained model of BERT gives the best result, it is not easily interpretable. The coherence metrics allow us to study the differences between fake news and satire in a straightforward manner.",
"Observing the significant features, in bold in Table TABREF3, we see a combination of surface level related features, such as sentence length and average word frequency, as well as semantic features including LSA (Latent Semantic Analysis) overlaps between verbs and between adjacent sentences. Semantic features which are associated with the gist representation of content are particularly interesting to see among the predictors since based on Fuzzy-trace theory BIBREF13, a well-known theory of decision making under risk, gist representation of content drives individual's decision to spread misinformation online. Also among the significant features, we observe the causal connectives, that are proven to be important in text comprehension, and two indices related to the text easability and readability, both suggesting that satire articles are more sophisticated, or less easy to read, than fake news articles."
],
[
"We addressed the challenge of identifying nuances between fake news and satire. Inspired by the humor and social message aspects of satire articles, we tested two classification approaches based on a state-of-the-art contextual language model, and linguistic features of textual coherence. Evaluation of our methods pointed to the existence of semantic and linguistic differences between fake news and satire. In particular, both methods achieved a significantly better performance than the baseline language-based method. Lastly, we studied the feature importance of our linguistic-based method to help shed light on the nuances between fake news and satire. For instance, we observed that satire articles are more sophisticated, or less easy to read, than fake news articles.",
"Overall, our contributions, with the improved classification accuracy and towards the understanding of nuances between fake news and satire, carry great implications with regard to the delicate balance of fighting misinformation while protecting free speech.",
"For future work, we plan to study additional linguistic cues, and specifically humor related features, such as absurdity and incongruity, which were shown to be good indicators of satire in previous work. Another interesting line of research would be to investigate techniques of identifying whether a story carries a political or social message, for example, by comparing it with timely news information."
]
],
"section_name": [
"Introduction",
"Related Work",
"Method",
"Method ::: Semantic Representation with BERT",
"Method ::: Linguistic Analysis with Coh-Metrix",
"Evaluation",
"Evaluation ::: Classification Between Fake News and Satire",
"Evaluation ::: Insights on Linguistic Nuances",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"05cde903fa56434a34b773b8fdb66e3ba01b1ce0",
"1d2a7acc3171f346984b41d746dbaf61de45445f"
],
"answer": [
{
"evidence": [
"We addressed the challenge of identifying nuances between fake news and satire. Inspired by the humor and social message aspects of satire articles, we tested two classification approaches based on a state-of-the-art contextual language model, and linguistic features of textual coherence. Evaluation of our methods pointed to the existence of semantic and linguistic differences between fake news and satire. In particular, both methods achieved a significantly better performance than the baseline language-based method. Lastly, we studied the feature importance of our linguistic-based method to help shed light on the nuances between fake news and satire. For instance, we observed that satire articles are more sophisticated, or less easy to read, than fake news articles."
],
"extractive_spans": [
"semantic and linguistic differences between",
" satire articles are more sophisticated, or less easy to read, than fake news articles"
],
"free_form_answer": "",
"highlighted_evidence": [
" Evaluation of our methods pointed to the existence of semantic and linguistic differences between fake news and satire.",
"For instance, we observed that satire articles are more sophisticated, or less easy to read, than fake news articles."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Observing the significant features, in bold in Table TABREF3, we see a combination of surface level related features, such as sentence length and average word frequency, as well as semantic features including LSA (Latent Semantic Analysis) overlaps between verbs and between adjacent sentences. Semantic features which are associated with the gist representation of content are particularly interesting to see among the predictors since based on Fuzzy-trace theory BIBREF13, a well-known theory of decision making under risk, gist representation of content drives individual's decision to spread misinformation online. Also among the significant features, we observe the causal connectives, that are proven to be important in text comprehension, and two indices related to the text easability and readability, both suggesting that satire articles are more sophisticated, or less easy to read, than fake news articles."
],
"extractive_spans": [
"satire articles are more sophisticated, or less easy to read, than fake news articles"
],
"free_form_answer": "",
"highlighted_evidence": [
"Also among the significant features, we observe the causal connectives, that are proven to be important in text comprehension, and two indices related to the text easability and readability, both suggesting that satire articles are more sophisticated, or less easy to read, than fake news articles."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"866316edd3b1113ffaa8ae94e72a0469bcd89fcd",
"ae163334b778bf738b2eb7476dfb3bcda22372ec"
],
"answer": [
{
"evidence": [
"With regard to research question RQ2 on the understanding of semantic and linguistic nuances between fake news and satire - a key advantage of studying the coherence metrics is explainability. While the pre-trained model of BERT gives the best result, it is not easily interpretable. The coherence metrics allow us to study the differences between fake news and satire in a straightforward manner."
],
"extractive_spans": [
"coherence metrics"
],
"free_form_answer": "",
"highlighted_evidence": [
"With regard to research question RQ2 on the understanding of semantic and linguistic nuances between fake news and satire - a key advantage of studying the coherence metrics is explainability. While the pre-trained model of BERT gives the best result, it is not easily interpretable. The coherence metrics allow us to study the differences between fake news and satire in a straightforward manner."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate the performance of our method based on the dataset of fake news and satire articles and using the F1 score with a ten-fold cross-validation as in the baseline work BIBREF1.",
"First, we consider the semantic representation with BERT. Our experiments included multiple pre-trained models of BERT with different sizes and cases sensitivity, among which the large uncased model, bert_uncased_L-24_H-1024_A-16, gave the best results. We use the recommended settings of hyper-parameters in BERT's Github repository and use the fake news and satire data to fine-tune the model. Furthermore, we tested separate models based on the headline and body text of a story, and in combination. Results are shown in Table TABREF6. The models based on the headline and text body give a similar F1 score. However, while the headline model performs poorly on precision, perhaps due to the short text, the model based on the text body performs poorly on recall. The model based on the full text of headline and body gives the best performance.",
"With regard to research question RQ2 on the understanding of semantic and linguistic nuances between fake news and satire - a key advantage of studying the coherence metrics is explainability. While the pre-trained model of BERT gives the best result, it is not easily interpretable. The coherence metrics allow us to study the differences between fake news and satire in a straightforward manner."
],
"extractive_spans": [],
"free_form_answer": "Empirical evaluation has done using 10 fold cross-validation considering semantic representation with BERT and measuring differences between fake news and satire using coherence metric.",
"highlighted_evidence": [
"We evaluate the performance of our method based on the dataset of fake news and satire articles and using the F1 score with a ten-fold cross-validation as in the baseline work BIBREF1.\n\nFirst, we consider the semantic representation with BERT.",
"With regard to research question RQ2 on the understanding of semantic and linguistic nuances between fake news and satire - a key advantage of studying the coherence metrics is explainability. While the pre-trained model of BERT gives the best result, it is not easily interpretable. The coherence metrics allow us to study the differences between fake news and satire in a straightforward manner."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"e160ddee743cc94f7632b39157228b9cb102611c",
"f0664796023701ba79d9b65837d9659ad9f40150"
],
"answer": [
{
"evidence": [
"In their work, Golbeck et al. studied whether there are differences in the language of fake news and satirical articles on the same topic that could be utilized with a word-based classification approach. A model using the Naive Bayes Multinomial algorithm is proposed in their paper which serves as the baseline in our experiments.",
"We evaluate the performance of our method based on the dataset of fake news and satire articles and using the F1 score with a ten-fold cross-validation as in the baseline work BIBREF1."
],
"extractive_spans": [
"Naive Bayes Multinomial algorithm"
],
"free_form_answer": "",
"highlighted_evidence": [
" A model using the Naive Bayes Multinomial algorithm is proposed in their paper which serves as the baseline in our experiments.",
"We evaluate the performance of our method based on the dataset of fake news and satire articles and using the F1 score with a ten-fold cross-validation as in the baseline work BIBREF1."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In their work, Golbeck et al. studied whether there are differences in the language of fake news and satirical articles on the same topic that could be utilized with a word-based classification approach. A model using the Naive Bayes Multinomial algorithm is proposed in their paper which serves as the baseline in our experiments."
],
"extractive_spans": [
"model using the Naive Bayes Multinomial algorithm"
],
"free_form_answer": "",
"highlighted_evidence": [
"A model using the Naive Bayes Multinomial algorithm is proposed in their paper which serves as the baseline in our experiments."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"26af93833326a12ee981fca1fbaffe138c00c468",
"87d621c7f00f4950769d4472d758455d2a3f941f"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Significant components of our logistic regression model using the Coh-Metrix features. Variables are also separated by their association with either satire or fake news. Bold: the remaining features following the step-wise backward elimination. Note: *** p < 0.001, ** p < 0.01, * p < 0.05."
],
"extractive_spans": [],
"free_form_answer": "First person singular pronoun incidence\nSentence length, number of words, \nEstimates of hypernymy for nouns \n...\nAgentless passive voice density,\nAverage word frequency for content words ,\nAdverb incidence\n\n...",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Significant components of our logistic regression model using the Coh-Metrix features. Variables are also separated by their association with either satire or fake news. Bold: the remaining features following the step-wise backward elimination. Note: *** p < 0.001, ** p < 0.01, * p < 0.05."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To investigate the predictive power of the linguistic cues, we use those Coh-Metrix indices that were significant in both the logistic and step-wise backward elimination regression models, and train a classifier on fake news and satire articles. We tested a few classification models, including Naive Bayes, Support Vector Machine (SVM), logistic regression, and gradient boosting - among which the SVM classifier gave the best results."
],
"extractive_spans": [
"Coh-Metrix indices"
],
"free_form_answer": "",
"highlighted_evidence": [
"To investigate the predictive power of the linguistic cues, we use those Coh-Metrix indices that were significant in both the logistic and step-wise backward elimination regression models, and train a classifier on fake news and satire articles."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"638a9be92f66cde63287c747393983594fad7f21",
"733ead5a96b484c8bd671398b2f931d8c3118a45"
],
"answer": [
{
"evidence": [
"To study the semantic nuances between fake news and satire, we use BERT BIBREF8, which stands for Bidirectional Encoder Representations from Transformers, and represents a state-of-the-art contextual language model. BERT is a method for pre-training language representations, meaning that it is pre-trained on a large text corpus and then used for downstream NLP tasks. Word2Vec BIBREF9 showed that we can use vectors to properly represent words in a way that captures semantic or meaning-related relationships. While Word2Vec is a context-free model that generates a single word-embedding for each word in the vocabulary, BERT generates a representation of each word that is based on the other words in the sentence. It was built upon recent work in pre-training contextual representations, such as ELMo BIBREF10 and ULMFit BIBREF11, and is deeply bidirectional, representing each word using both its left and right context. We use the pre-trained models of BERT and fine-tune it on the dataset of fake news and satire articles using Adam optimizer with 3 types of decay and 0.01 decay rate. Our BERT-based binary classifier is created by adding a single new layer in BERT's neural network architecture that will be trained to fine-tune BERT to our task of classifying fake news and satire articles."
],
"extractive_spans": [
"BERT"
],
"free_form_answer": "",
"highlighted_evidence": [
"To study the semantic nuances between fake news and satire, we use BERT BIBREF8, which stands for Bidirectional Encoder Representations from Transformers, and represents a state-of-the-art contextual language model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To study the semantic nuances between fake news and satire, we use BERT BIBREF8, which stands for Bidirectional Encoder Representations from Transformers, and represents a state-of-the-art contextual language model. BERT is a method for pre-training language representations, meaning that it is pre-trained on a large text corpus and then used for downstream NLP tasks. Word2Vec BIBREF9 showed that we can use vectors to properly represent words in a way that captures semantic or meaning-related relationships. While Word2Vec is a context-free model that generates a single word-embedding for each word in the vocabulary, BERT generates a representation of each word that is based on the other words in the sentence. It was built upon recent work in pre-training contextual representations, such as ELMo BIBREF10 and ULMFit BIBREF11, and is deeply bidirectional, representing each word using both its left and right context. We use the pre-trained models of BERT and fine-tune it on the dataset of fake news and satire articles using Adam optimizer with 3 types of decay and 0.01 decay rate. Our BERT-based binary classifier is created by adding a single new layer in BERT's neural network architecture that will be trained to fine-tune BERT to our task of classifying fake news and satire articles."
],
"extractive_spans": [
"BERT "
],
"free_form_answer": "",
"highlighted_evidence": [
"To study the semantic nuances between fake news and satire, we use BERT BIBREF8, which stands for Bidirectional Encoder Representations from Transformers, and represents a state-of-the-art contextual language model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What nuances between fake news and satire were discovered?",
"What empirical evaluation was used?",
"What is the baseline?",
"Which linguistic features are used?",
"What contextual language model is used?"
],
"question_id": [
"0a70af6ba334dfd3574991b1dd06f54fc6a700f2",
"98b97d24f31e9c535997e9b6cb126eb99fc72a90",
"71b07d08fb6ac8732aa4060ae94ec7c0657bb1db",
"812c974311747f74c3aad23999bfef50539953c8",
"180c7bea8caf05ca97d9962b90eb454be4176425"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"humor",
"humor",
"humor",
"humor",
"humor"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Significant components of our logistic regression model using the Coh-Metrix features. Variables are also separated by their association with either satire or fake news. Bold: the remaining features following the step-wise backward elimination. Note: *** p < 0.001, ** p < 0.01, * p < 0.05.",
"Table 2: Results of classification between fake news and satire articles using BERT pre-trained models, based on the headline, body and full text. Bold: best performing model. P: Precision, and R: Recall",
"Table 3: Summary of results of classification between fake news and satire articles using the baseline Multinomial Naive Bayes method, the linguistic cues of text coherence and semantic representation with a pretrained BERT model. Statistically significant differences with the baseline are marked with ’*’. Bold: best performing model. P: Precision, and R: Recall"
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png"
]
} | [
"What empirical evaluation was used?",
"Which linguistic features are used?"
] | [
[
"1910.01160-Evaluation ::: Classification Between Fake News and Satire-0",
"1910.01160-Evaluation ::: Classification Between Fake News and Satire-1",
"1910.01160-Evaluation ::: Insights on Linguistic Nuances-0"
],
[
"1910.01160-3-Table1-1.png",
"1910.01160-Evaluation ::: Classification Between Fake News and Satire-2"
]
] | [
"Empirical evaluation has done using 10 fold cross-validation considering semantic representation with BERT and measuring differences between fake news and satire using coherence metric.",
"First person singular pronoun incidence\nSentence length, number of words, \nEstimates of hypernymy for nouns \n...\nAgentless passive voice density,\nAverage word frequency for content words ,\nAdverb incidence\n\n..."
] | 250 |
1602.07776 | Recurrent Neural Network Grammars | We introduce recurrent neural network grammars, probabilistic models of sentences with explicit phrase structure. We explain efficient inference procedures that allow application to both parsing and language modeling. Experiments show that they provide better parsing in English than any single previously published supervised generative model and better language modeling than state-of-the-art sequential RNNs in English and Chinese. | {
"paragraphs": [
[
"Sequential recurrent neural networks (RNNs) are remarkably effective models of natural language. In the last few years, language model results that substantially improve over long-established state-of-the-art baselines have been obtained using RNNs BIBREF0 , BIBREF1 as well as in various conditional language modeling tasks such as machine translation BIBREF2 , image caption generation BIBREF3 , and dialogue generation BIBREF4 . Despite these impressive results, sequential models are a priori inappropriate models of natural language, since relationships among words are largely organized in terms of latent nested structures rather than sequential surface order BIBREF5 .",
"In this paper, we introduce recurrent neural network grammars (RNNGs; § SECREF2 ), a new generative probabilistic model of sentences that explicitly models nested, hierarchical relationships among words and phrases. RNNGs operate via a recursive syntactic process reminiscent of probabilistic context-free grammar generation, but decisions are parameterized using RNNs that condition on the entire syntactic derivation history, greatly relaxing context-free independence assumptions.",
"The foundation of this work is a top-down variant of transition-based parsing (§ SECREF3 ). We give two variants of the algorithm, one for parsing (given an observed sentence, transform it into a tree), and one for generation. While several transition-based neural models of syntactic generation exist BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , these have relied on structure building operations based on parsing actions in shift-reduce and left-corner parsers which operate in a largely bottom-up fashion. While this construction is appealing because inference is relatively straightforward, it limits the use of top-down grammar information, which is helpful for generation BIBREF11 . RNNGs maintain the algorithmic convenience of transition-based parsing but incorporate top-down (i.e., root-to-terminal) syntactic information (§ SECREF4 ).",
"The top-down transition set that RNNGs are based on lends itself to discriminative modeling as well, where sequences of transitions are modeled conditional on the full input sentence along with the incrementally constructed syntactic structures. Similar to previously published discriminative bottom-up transition-based parsers BIBREF7 , BIBREF12 , BIBREF13 , greedy prediction with our model yields a linear-time deterministic parser (provided an upper bound on the number of actions taken between processing subsequent terminal symbols is imposed); however, our algorithm generates arbitrary tree structures directly, without the binarization required by shift-reduce parsers. The discriminative model also lets us use ancestor sampling to obtain samples of parse trees for sentences, and this is used to solve a second practical challenge with RNNGs: approximating the marginal likelihood and MAP tree of a sentence under the generative model. We present a simple importance sampling algorithm which uses samples from the discriminative parser to solve inference problems in the generative model (§ SECREF5 ).",
"Experiments show that RNNGs are effective for both language modeling and parsing (§ SECREF6 ). Our generative model obtains (i) the best-known parsing results using a single supervised generative model and (ii) better perplexities in language modeling than state-of-the-art sequential LSTM language models. Surprisingly—although in line with previous parsing results showing the effectiveness of generative models BIBREF7 , BIBREF14 —parsing with the generative model obtains significantly better results than parsing with the discriminative model."
],
[
"Formally, an RNNG is a triple INLINEFORM0 consisting of a finite set of nonterminal symbols ( INLINEFORM1 ), a finite set of terminal symbols ( INLINEFORM2 ) such that INLINEFORM3 , and a collection of neural network parameters INLINEFORM4 . It does not explicitly define rules since these are implicitly characterized by INLINEFORM5 . The algorithm that the grammar uses to generate trees and strings in the language is characterized in terms of a transition-based algorithm, which is outlined in the next section. In the section after that, the semantics of the parameters that are used to turn this into a stochastic algorithm that generates pairs of trees and strings are discussed."
],
[
"RNNGs are based on a top-down generation algorithm that relies on a stack data structure of partially completed syntactic constituents. To emphasize the similarity of our algorithm to more familiar bottom-up shift-reduce recognition algorithms, we first present the parsing (rather than generation) version of our algorithm (§ SECREF4 ) and then present modifications to turn it into a generator (§ SECREF19 )."
],
[
"The parsing algorithm transforms a sequence of words INLINEFORM0 into a parse tree INLINEFORM1 using two data structures (a stack and an input buffer). As with the bottom-up algorithm of BIBREF12 , our algorithm begins with the stack ( INLINEFORM2 ) empty and the complete sequence of words in the input buffer ( INLINEFORM3 ). The buffer contains unprocessed terminal symbols, and the stack contains terminal symbols, “open” nonterminal symbols, and completed constituents. At each timestep, one of the following three classes of operations (Fig. FIGREF9 ) is selected by a classifier, based on the current contents on the stack and buffer:",
" INLINEFORM0 introduces an “open nonterminal” X onto the top of the stack. Open nonterminals are written as a nonterminal symbol preceded by an open parenthesis, e.g., “(VP”, and they represent a nonterminal whose child nodes have not yet been fully constructed. Open nonterminals are “closed” to form complete constituents by subsequent reduce operations.",
"shift removes the terminal symbol INLINEFORM0 from the front of the input buffer, and pushes it onto the top of the stack.",
"reduce repeatedly pops completed subtrees or terminal symbols from the stack until an open nonterminal is encountered, and then this open NT is popped and used as the label of a new constituent that has the popped subtrees as its children. This new completed constituent is pushed onto the stack as a single composite item. A single reduce operation can thus create constituents with an unbounded number of children.",
"The parsing algorithm terminates when there is a single completed constituent on the stack and the buffer is empty. Fig. FIGREF10 shows an example parse using our transition set. Note that in this paper we do not model preterminal symbols (i.e., part-of-speech tags) and our examples therefore do not include them.",
"Our transition set is closely related to the operations used in Earley's algorithm which likewise introduces nonterminals symbols with its predict operation and later completes them after consuming terminal symbols one at a time using scan BIBREF15 . It is likewise closely related to the “linearized” parse trees proposed by BIBREF16 and to the top-down, left-to-right decompositions of trees used in previous generative parsing and language modeling work BIBREF11 , BIBREF17 , BIBREF18 .",
"A further connection is to INLINEFORM0 parsing which uses an unbounded lookahead (compactly represented by a DFA) to distinguish between parse alternatives in a top-down parser BIBREF19 ; however, our parser uses an RNN encoding of the lookahead rather than a DFA.",
"To guarantee that only well-formed phrase-structure trees are produced by the parser, we impose the following constraints on the transitions that can be applied at each step which are a function of the parser state INLINEFORM0 where INLINEFORM1 is the number of open nonterminals on the stack:",
"The INLINEFORM0 operation can only be applied if INLINEFORM1 is not empty and INLINEFORM2 .",
"The shift operation can only be applied if INLINEFORM0 is not empty and INLINEFORM1 .",
"The reduce operation can only be applied if the top of the stack is not an open nonterminal symbol.",
"The reduce operation can only be applied if INLINEFORM0 or if the buffer is empty.",
"To designate the set of valid parser transitions, we write INLINEFORM0 ."
],
[
"The parsing algorithm that maps from sequences of words to parse trees can be adapted with minor changes to produce an algorithm that stochastically generates trees and terminal symbols. Two changes are required: (i) there is no input buffer of unprocessed words, rather there is an output buffer ( INLINEFORM0 ), and (ii) instead of a shift operation there are INLINEFORM1 operations which generate terminal symbol INLINEFORM2 and add it to the top of the stack and the output buffer. At each timestep an action is stochastically selected according to a conditional distribution that depends on the current contents of INLINEFORM3 and INLINEFORM4 . The algorithm terminates when a single completed constituent remains on the stack. Fig. FIGREF12 shows an example generation sequence.",
"The generation algorithm also requires slightly modified constraints. These are:",
"The INLINEFORM0 operation can only be applied if INLINEFORM1 .",
"The reduce operation can only be applied if the top of the stack is not an open nonterminal symbol and INLINEFORM0 .",
"To designate the set of valid generator transitions, we write INLINEFORM0 .",
"This transition set generates trees using nearly the same structure building actions and stack configurations as the “top-down PDA” construction proposed by BIBREF20 , albeit without the restriction that the trees be in Chomsky normal form."
],
[
"Any parse tree can be converted to a sequence of transitions via a depth-first, left-to-right traversal of a parse tree. Since there is a unique depth-first, left-ro-right traversal of a tree, there is exactly one transition sequence of each tree. For a tree INLINEFORM0 and a sequence of symbols INLINEFORM1 , we write INLINEFORM2 to indicate the corresponding sequence of generation transitions, and INLINEFORM3 to indicate the parser transitions."
],
[
"A detailed analysis of the algorithmic properties of our top-down parser is beyond the scope of this paper; however, we briefly state several facts. Assuming the availability of constant time push and pop operations, the runtime is linear in the number of the nodes in the parse tree that is generated by the parser/generator (intuitively, this is true since although an individual reduce operation may require applying a number of pops that is linear in the number of input symbols, the total number of pop operations across an entire parse/generation run will also be linear). Since there is no way to bound the number of output nodes in a parse tree as a function of the number of input words, stating the runtime complexity of the parsing algorithm as a function of the input size requires further assumptions. Assuming our fixed constraint on maximum depth, it is linear."
],
[
"Our generation algorithm algorithm differs from previous stack-based parsing/generation algorithms in two ways. First, it constructs rooted tree structures top down (rather than bottom up), and second, the transition operators are capable of directly generating arbitrary tree structures rather than, e.g., assuming binarized trees, as is the case in much prior work that has used transition-based algorithms to produce phrase-structure trees BIBREF12 , BIBREF13 , BIBREF21 ."
],
[
"RNNGs use the generator transition set just presented to define a joint distribution on syntax trees ( INLINEFORM0 ) and words ( INLINEFORM1 ). This distribution is defined as a sequence model over generator transitions that is parameterized using a continuous space embedding of the algorithm state at each time step ( INLINEFORM2 ); i.e., INLINEFORM3 ",
" and where action-specific embeddings INLINEFORM0 and bias vector INLINEFORM1 are parameters in INLINEFORM2 .",
"The representation of the algorithm state at time INLINEFORM0 , INLINEFORM1 , is computed by combining the representation of the generator's three data structures: the output buffer ( INLINEFORM2 ), represented by an embedding INLINEFORM3 , the stack ( INLINEFORM4 ), represented by an embedding INLINEFORM5 , and the history of actions ( INLINEFORM6 ) taken by the generator, represented by an embedding INLINEFORM7 , INLINEFORM8 ",
" where INLINEFORM0 and INLINEFORM1 are parameters. Refer to Figure FIGREF26 for an illustration of the architecture.",
"The output buffer, stack, and history are sequences that grow unboundedly, and to obtain representations of them we use recurrent neural networks to “encode” their contents BIBREF22 . Since the output buffer and history of actions are only appended to and only contain symbols from a finite alphabet, it is straightforward to apply a standard RNN encoding architecture. The stack ( INLINEFORM0 ) is more complicated for two reasons. First, the elements of the stack are more complicated objects than symbols from a discrete alphabet: open nonterminals, terminals, and full trees, are all present on the stack. Second, it is manipulated using both push and pop operations. To efficiently obtain representations of INLINEFORM1 under push and pop operations, we use stack LSTMs BIBREF23 . To represent complex parse trees, we define a new syntactic composition function that recursively defines representations of trees."
],
[
"When a reduce operation is executed, the parser pops a sequence of completed subtrees and/or tokens (together with their vector embeddings) from the stack and makes them children of the most recent open nonterminal on the stack, “completing” the constituent. To compute an embedding of this new subtree, we use a composition function based on bidirectional LSTMs, which is illustrated in Fig. FIGREF28 .",
"The first vector read by the LSTM in both the forward and reverse directions is an embedding of the label on the constituent being constructed (in the figure, NP). This is followed by the embeddings of the child subtrees (or tokens) in forward or reverse order. Intuitively, this order serves to “notify” each LSTM what sort of head it should be looking for as it processes the child node embeddings. The final state of the forward and reverse LSTMs are concatenated, passed through an affine transformation and a INLINEFORM0 nonlinearity to become the subtree embedding. Because each of the child node embeddings ( INLINEFORM2 , INLINEFORM3 , INLINEFORM4 in Fig. FIGREF28 ) is computed similarly (if it corresponds to an internal node), this composition function is a kind of recursive neural network."
],
[
"To reduce the size of INLINEFORM0 , word generation is broken into two parts. First, the decision to generate is made (by predicting INLINEFORM1 as an action), and then choosing the word, conditional on the current parser state. To further reduce the computational complexity of modeling the generation of a word, we use a class-factored softmax BIBREF26 , BIBREF27 . By using INLINEFORM2 classes for a vocabulary of size INLINEFORM3 , this prediction step runs in time INLINEFORM4 rather than the INLINEFORM5 of the full-vocabulary softmax. To obtain clusters, we use the greedy agglomerative clustering algorithm of BIBREF28 ."
],
[
"The parameters in the model are learned to maximize the likelihood of a corpus of trees."
],
[
"A discriminative parsing model can be obtained by replacing the embedding of INLINEFORM0 at each time step with an embedding of the input buffer INLINEFORM1 . To train this model, the conditional likelihood of each sequence of actions given the input string is maximized."
],
[
"Our generative model INLINEFORM0 defines a joint distribution on trees ( INLINEFORM1 ) and sequences of words ( INLINEFORM2 ). To evaluate this as a language model, it is necessary to compute the marginal probability INLINEFORM3 . And, to evaluate the model as a parser, we need to be able to find the MAP parse tree, i.e., the tree INLINEFORM4 that maximizes INLINEFORM5 . However, because of the unbounded dependencies across the sequence of parsing actions in our model, exactly solving either of these inference problems is intractable. To obtain estimates of these, we use a variant of importance sampling BIBREF31 .",
"Our importance sampling algorithm uses a conditional proposal distribution INLINEFORM0 with the following properties: (i) INLINEFORM1 ; (ii) samples INLINEFORM2 can be obtained efficiently; and (iii) the conditional probabilities INLINEFORM3 of these samples are known. While many such distributions are available, the discriminatively trained variant of our parser (§ SECREF32 ) fulfills these requirements: sequences of actions can be sampled using a simple ancestral sampling approach, and, since parse trees and action sequences exist in a one-to-one relationship, the product of the action probabilities is the conditional probability of the parse tree under INLINEFORM4 . We therefore use our discriminative parser as our proposal distribution.",
"Importance sampling uses importance weights, which we define as INLINEFORM0 , to compute this estimate. Under this definition, we can derive the estimator as follows: INLINEFORM1 ",
" We now replace this expectation with its Monte Carlo estimate as follows, using INLINEFORM0 samples from INLINEFORM1 : INLINEFORM2 ",
" To obtain an estimate of the MAP tree INLINEFORM0 , we choose the sampled tree with the highest probability under the joint model INLINEFORM1 ."
],
[
"We present results of our two models both on parsing (discriminative and generative) and as a language model (generative only) in English and Chinese."
],
[
"It is clear from our experiments that the proposed generative model is quite effective both as a parser and as a language model. This is the result of (i) relaxing conventional independence assumptions (e.g., context-freeness) and (ii) inferring continuous representations of symbols alongside non-linear models of their syntactic relationships. The most significant question that remains is why the discriminative model—which has more information available to it than the generative model—performs worse than the generative model. This pattern has been observed before in neural parsing by BIBREF7 , who hypothesized that larger, unstructured conditioning contexts are harder to learn from, and provide opportunities to overfit. Our discriminative model conditions on the entire history, stack, and buffer, while our generative model only accesses the history and stack. The fully discriminative model of BIBREF16 was able to obtain results similar to those of our generative model (albeit using much larger training sets obtained through semisupervision) but similar results to those of our discriminative parser using the same data. In light of their results, we believe Henderson's hypothesis is correct, and that generative models should be considered as a more statistically efficient method for learning neural networks from small data."
],
[
"Our language model combines work from two modeling traditions: (i) recurrent neural network language models and (ii) syntactic language modeling. Recurrent neural network language models use RNNs to compute representations of an unbounded history of words in a left-to-right language model BIBREF0 , BIBREF1 , BIBREF44 . Syntactic language models jointly generate a syntactic structure and a sequence of words BIBREF45 , BIBREF46 . There is an extensive literature here, but one strand of work has emphasized a bottom-up generation of the tree, using variants of shift-reduce parser actions to define the probability space BIBREF47 , BIBREF8 . The neural-network–based model of BIBREF7 is particularly similar to ours in using an unbounded history in a neural network architecture to parameterize generative parsing based on a left-corner model. Dependency-only language models have also been explored BIBREF9 , BIBREF48 , BIBREF10 . Modeling generation top-down as a rooted branching process that recursively rewrites nonterminals has been explored by BIBREF41 and BIBREF11 . Of particular note is the work of BIBREF18 , which uses random forests and hand-engineered features over the entire syntactic derivation history to make decisions over the next action to take.",
"The neural networks we use to model sentences are structured according to the syntax of the sentence being generated. Syntactically structured neural architectures have been explored in a number of applications, including discriminative parsing BIBREF34 , BIBREF49 , sentiment analysis BIBREF25 , BIBREF24 , and sentence representation BIBREF50 , BIBREF51 . However, these models have been, without exception, discriminative; this is the first work to use syntactically structured neural models to generate language. Earlier work has demonstrated that sequential RNNs have the capacity to recognize context-free (and beyond) languages BIBREF52 , BIBREF53 . In contrast, our work may be understood as a way of incorporating a context-free inductive bias into the model structure."
],
[
"RNNGs can be combined with a particle filter inference scheme (rather than the importance sampling method based on a discriminative parser, § SECREF5 ) to produce a left-to-right marginalization algorithm that runs in expected linear time. Thus, they could be used in applications that require language models.",
"A second possibility is to replace the sequential generation architectures found in many neural network transduction problems that produce sentences conditioned on some input. Previous work in machine translation has showed that conditional syntactic models can function quite well without the computationally expensive marginalization process at decoding time BIBREF54 , BIBREF55 .",
"A third consideration regarding how RNNGs, human sentence processing takes place in a left-to-right, incremental order. While an RNNG is not a processing model (it is a grammar), the fact that it is left-to-right opens up several possibilities for developing new sentence processing models based on an explicit grammars, similar to the processing model of BIBREF18 .",
"Finally, although we considered only the supervised learning scenario, RNNGs are joint models that could be trained without trees, for example, using expectation maximization."
],
[
"We introduced recurrent neural network grammars, a probabilistic model of phrase-structure trees that can be trained generatively and used as a language model or a parser, and a corresponding discriminative model that can be used as a parser. Apart from out-of-vocabulary preprocessing, the approach requires no feature design or transformations to treebank data. The generative model outperforms every previously published parser built on a single supervised generative model in English, and a bit behind the best-reported generative model in Chinese. As language models, RNNGs outperform the best single-sentence language models."
],
[
"We thank Brendan O'Connor, Swabha Swayamdipta, and Brian Roark for feedback on drafts of this paper, and Jan Buys, Phil Blunsom, and Yue Zhang for help with data preparation. This work was sponsored in part by the Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O) under the Low Resource Languages for Emergent Incidents (LORELEI) program issued by DARPA/I2O under Contract No. HR0011-15-C-0114; it was also supported in part by Contract No. W911NF-15-1-0543 with the DARPA and the Army Research Office (ARO). Approved for public release, distribution unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Miguel Ballesteros was supported by the European Commission under the contract numbers FP7-ICT-610411 (project MULTISENSOR) and H2020-RIA-645012 (project KRISTINA).",
"2pt Chris Dyer INLINEFORM0 Adhiguna Kuncoro INLINEFORM1 Miguel Ballesteros INLINEFORM2 Noah A. Smith INLINEFORM3 INLINEFORM4 School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA INLINEFORM5 NLP Group, Pompeu Fabra University, Barcelona, Spain INLINEFORM6 Google DeepMind, London, UK INLINEFORM7 Computer Science & Engineering, University of Washington, Seattle, WA, USA {cdyer,akuncoro}@cs.cmu.edu [email protected], [email protected] [ Corrigendum to Recurrent Neural Network Grammars ] Due to an implentation bug in the RNNG's recursive composition function, the results reported in Dyer et al. (2016) did not correspond to the model as it was presented. This corrigendum describes the buggy implementation and reports results with a corrected implementation. After correction, on the PTB §23 and CTB 5.1 test sets, respectively, the generative model achieves language modeling perplexities of 105.2 and 148.5, and phrase-structure parsing F1 of 93.3 and 86.9, a new state of the art in phrase-structure parsing for both languages. RNNG Composition Function and Implementation Error The composition function reduces a completed constituent into a single vector representation using a bidirectional LSTM (Figure FIGREF47 ) over embeddings of the constituent's children as well as an embedding of the resulting nonterminal symbol type. The implementation error (Figure FIGREF47 ) composed the constituent (NP the hungry cat) by reading the sequence “NP the hungry NP”, that is, it discarded the rightmost child of every constituent and replaced it with a second copy of the constituent's nonterminal symbol. This error occurs for every constituent and means crucial information is not properly propagated upwards in the tree. Results after Correction The implementation error affected both the generative and discriminative RNNGs. We summarize corrected English phrase-structure PTB §23 parsing result in Table TABREF49 , Chinese (CTB 5.1 §271–300) in Table TABREF50 (achieving the the best reported result on both datasets), and English and Chinese language modeling perplexities in Table TABREF51 . The considerable improvement in parsing accuracy indicates that properly composing the constituent and propagating information upwards is crucial. Despite slightly higher language modeling perplexity on PTB §23, the fixed RNNG still outperforms a highly optimized sequential LSTM baseline. "
]
],
"section_name": [
"Introduction",
"RNN Grammars",
"Top-down Parsing and Generation",
"Parser Transitions",
"Generator Transitions",
"Transition Sequences from Trees",
"Runtime Analysis",
"Comparison to Other Models",
"Generative Model",
"Syntactic Composition Function",
"Word Generation",
"Training",
"Discriminative Parsing Model",
"Inference via Importance Sampling",
"Experiments",
"Discussion",
"Related Work",
"Outlook",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"05e8e7853a199577e1b2ad5d86b9eebef130d834",
"5255ee9e2061fa98639f64c7a925c0a574898b84"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Parsing results on PTB §23 (D=discriminative, G=generative, S=semisupervised). ? indicates the (Vinyals et al., 2015) result with trained only on the WSJ corpus without ensembling.",
"FLOAT SELECTED: Table 3: Parsing results on CTB 5.1.",
"FLOAT SELECTED: Table 4: Language model perplexity results."
],
"extractive_spans": [],
"free_form_answer": "Vinyals et al (2015) for English parsing, Wang et al (2015) for Chinese parsing, and LSTM LM for Language modeling both in English and Chinese ",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Parsing results on PTB §23 (D=discriminative, G=generative, S=semisupervised). ? indicates the (Vinyals et al., 2015) result with trained only on the WSJ corpus without ensembling.",
"FLOAT SELECTED: Table 3: Parsing results on CTB 5.1.",
"FLOAT SELECTED: Table 4: Language model perplexity results."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 4: Language model perplexity results."
],
"extractive_spans": [],
"free_form_answer": "IKN 5-gram, LSTM LM",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Language model perplexity results."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
""
],
"paper_read": [
""
],
"question": [
"what state of the art models do they compare to?"
],
"question_id": [
"95083d486769b9b5e8c57fe2ef1b452fc3ea5012"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
""
]
} | {
"caption": [
"Figure 2: Top-down parsing example.",
"Figure 4: Joint generation of a parse tree and sentence.",
"Figure 1: Parser transitions showing the stack, buffer, and open nonterminal count before and after each action type. S represents the stack, which contains open nonterminals and completed subtrees; B represents the buffer of unprocessed terminal symbols; x is a terminal symbol, X is a nonterminal symbol, and each τ is a completed subtree. The top of the stack is to the right, and the buffer is consumed from left to right. Elements on the stack and buffer are delimited by a vertical bar ( | ).",
"Figure 6: Syntactic composition function based on bidirectional LSTMs that is executed during a REDUCE operation; the network on the right models the structure on the left.",
"Figure 5: Neural architecture for defining a distribution over at given representations of the stack (St), output buffer (Tt) and history of actions (a<t). Details of the composition architecture of the NP, the action history LSTM, and the other elements of the stack are not shown. This architecture corresponds to the generator state at line 7 of Figure 4.",
"Table 1: Corpus statistics.",
"Table 2: Parsing results on PTB §23 (D=discriminative, G=generative, S=semisupervised). ? indicates the (Vinyals et al., 2015) result with trained only on the WSJ corpus without ensembling.",
"Table 3: Parsing results on CTB 5.1.",
"Table 4: Language model perplexity results.",
"Figure 7: Correct RNNG composition function for the constituent (NP the hungry cat).",
"Figure 8: Buggy implementation of the RNNG composition function for the constituent (NP the hungry cat). Note that the right-most child, cat, has been replaced by a second NP.",
"Table 6: Parsing results on CTB 5.1 including results with the buggy composition function implementation (indicated by †) and with the correct implementation.",
"Table 7: PTB and CTB language modeling results including results with the buggy composition function implementation (indicated by †) and with the correct implementation."
],
"file": [
"4-Figure2-1.png",
"4-Figure4-1.png",
"4-Figure1-1.png",
"5-Figure6-1.png",
"6-Figure5-1.png",
"7-Table1-1.png",
"8-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"13-Figure7-1.png",
"13-Figure8-1.png",
"13-Table6-1.png",
"13-Table7-1.png"
]
} | [
"what state of the art models do they compare to?"
] | [
[
"1602.07776-8-Table4-1.png",
"1602.07776-8-Table2-1.png",
"1602.07776-8-Table3-1.png"
]
] | [
"IKN 5-gram, LSTM LM"
] | 251 |
1809.02208 | Assessing Gender Bias in Machine Translation -- A Case Study with Google Translate | Recently there has been a growing concern about machine bias, where trained statistical models grow to reflect controversial societal asymmetries, such as gender or racial bias. A significant number of AI tools have recently been suggested to be harmfully biased towards some minority, with reports of racist criminal behavior predictors, Iphone X failing to differentiate between two Asian people and Google photos' mistakenly classifying black people as gorillas. Although a systematic study of such biases can be difficult, we believe that automated translation tools can be exploited through gender neutral languages to yield a window into the phenomenon of gender bias in AI. In this paper, we start with a comprehensive list of job positions from the U.S. Bureau of Labor Statistics (BLS) and used it to build sentences in constructions like"He/She is an Engineer"in 12 different gender neutral languages such as Hungarian, Chinese, Yoruba, and several others. We translate these sentences into English using the Google Translate API, and collect statistics about the frequency of female, male and gender-neutral pronouns in the translated output. We show that GT exhibits a strong tendency towards male defaults, in particular for fields linked to unbalanced gender distribution such as STEM jobs. We ran these statistics against BLS' data for the frequency of female participation in each job position, showing that GT fails to reproduce a real-world distribution of female workers. We provide experimental evidence that even if one does not expect in principle a 50:50 pronominal gender distribution, GT yields male defaults much more frequently than what would be expected from demographic data alone. We are hopeful that this work will ignite a debate about the need to augment current statistical translation tools with debiasing techniques which can already be found in the scientific literature. | {
"paragraphs": [
[
"Although the idea of automated translation can in principle be traced back to as long as the 17th century with René Descartes proposal of an “universal language” BIBREF0 , machine translation has only existed as a technological field since the 1950s, with a pioneering memorandum by Warren Weaver BIBREF1 , BIBREF2 discussing the possibility of employing digital computers to perform automated translation. The now famous Georgetown-IBM experiment followed not long after, providing the first experimental demonstration of the prospects of automating translation by the means of successfully converting more than sixty Russian sentences into English BIBREF3 . Early systems improved upon the results of the Georgetown-IBM experiment by exploiting Noam Chomsky's theory of generative linguistics, and the field experienced a sense of optimism about the prospects of fully automating natural language translation. As is customary with artificial intelligence, the initial optimistic stage was followed by an extended period of strong disillusionment with the field, of which the catalyst was the influential 1966 ALPAC (Automatic Language Processing Advisory Committee) report( BIBREF4 . Such research was then disfavoured in the United States, making a re-entrance in the 1970s before the 1980s surge in statistical methods for machine translation BIBREF5 , BIBREF6 . Statistical and example-based machine translation have been on the rise ever since BIBREF7 , BIBREF8 , BIBREF9 , with highly successful applications such as Google Translate (recently ported to a neural translation technology BIBREF10 ) amounting to over 200 million users daily.",
"In spite of the recent commercial success of automated translation tools (or perhaps stemming directly from it), machine translation has amounted a significant deal of criticism. Noted philosopher and founding father of generative linguistics Noam Chomsky has argued that the achievements of machine translation, while successes in a particular sense, are not successes in the sense that science has ever been interested in: they merely provide effective ways, according to Chomsky, of approximating unanalyzed data BIBREF11 , BIBREF12 . Chomsky argues that the faith of the MT community in statistical methods is absurd by analogy with a standard scientific field such as physics BIBREF11 :",
"I mean actually you could do physics this way, instead of studying things like balls rolling down frictionless planes, which can't happen in nature, if you took a ton of video tapes of what's happening outside my office window, let's say, you know, leaves flying and various things, and you did an extensive analysis of them, you would get some kind of prediction of what's likely to happen next, certainly way better than anybody in the physics department could do. Well that's a notion of success which is I think novel, I don't know of anything like it in the history of science.",
"Leading AI researcher and Google's Director of Research Peter Norvig responds to these arguments by suggesting that even standard physical theories such as the Newtonian model of gravitation are, in a sense, trained BIBREF12 :",
"As another example, consider the Newtonian model of gravitational attraction, which says that the force between two objects of mass INLINEFORM0 and INLINEFORM1 a distance INLINEFORM2 apart is given by INLINEFORM3 ",
"where INLINEFORM0 is the universal gravitational constant. This is a trained model because the gravitational constant G is determined by statistical inference over the results of a series of experiments that contain stochastic experimental error. It is also a deterministic (non-probabilistic) model because it states an exact functional relationship. I believe that Chomsky has no objection to this kind of statistical model. Rather, he seems to reserve his criticism for statistical models like Shannon's that have quadrillions of parameters, not just one or two.",
"Chomsky and Norvig's debate BIBREF12 is a microcosm of the two leading standpoints about the future of science in the face of increasingly sophisticated statistical models. Are we, as Chomsky seems to argue, jeopardizing science by relying on statistical tools to perform predictions instead of perfecting traditional science models, or are these tools, as Norvig argues, components of the scientific standard since its conception? Currently there are no satisfactory resolutions to this conundrum, but perhaps statistical models pose an even greater and more urgent threat to our society.",
"On a 2014 article, Londa Schiebinger suggested that scientific research fails to take gender issues into account, arguing that the phenomenon of male defaults on new technologies such as Google Translate provides a window into this asymmetry BIBREF13 . Since then, recent worrisome results in machine learning have somewhat supported Schiebinger's view. Not only Google photos' statistical image labeling algorithm has been found to classify dark-skinned people as gorillas BIBREF14 and purportedly intelligent programs have been suggested to be negatively biased against black prisoners when predicting criminal behavior BIBREF15 but the machine learning revolution has also indirectly revived heated debates about the controversial field of physiognomy, with proposals of AI systems capable of identifying the sexual orientation of an individual through its facial characteristics BIBREF16 . Similar concerns are growing at an unprecedented rate in the media, with reports of Apple's Iphone X face unlock feature failing to differentiate between two different Asian people BIBREF17 and automatic soap dispensers which reportedly do not recognize black hands BIBREF18 . Machine bias, the phenomenon by which trained statistical models unbeknownst to their creators grow to reflect controversial societal asymmetries, is growing into a pressing concern for the modern times, invites us to ask ourselves whether there are limits to our dependence on these techniques – and more importantly, whether some of these limits have already been traversed. In the wave of algorithmic bias, some have argued for the creation of some kind of agency in the likes of the Food and Drug Administration, with the sole purpose of regulating algorithmic discrimination BIBREF19 .",
"With this in mind, we propose a quantitative analysis of the phenomenon of gender bias in machine translation. We illustrate how this can be done by simply exploiting Google Translate to map sentences from a gender neutral language into English. As Figure FIGREF1 exemplifies, this approach produces results consistent with the hypothesis that sentences about stereotypical gender roles are translated accordingly with high probability: nurse and baker are translated with female pronouns while engineer and CEO are translated with male ones."
],
[
"As of 2018, Google Translate is one of the largest publicly available machine translation tools in existence, amounting 200 million users daily BIBREF21 . Initially relying on United Nations and European Parliament transcripts to gather data, since 2014 Google Translate has inputed content from its users through the Translate Community initiative BIBREF22 . Recently however there has been a growing concern about gender asymmetries in the translation mechanism, with some heralding it as “sexist” BIBREF23 . This concern has to at least some extent a scientific backup: A recent study has shown that word embeddings are particularly prone to yielding gender stereotypes BIBREF24 . Fortunately, the researchers propose a relatively simple debiasing algorithm with promising results: they were able to cut the proportion of stereotypical analogies from INLINEFORM0 to INLINEFORM1 without any significant compromise in the performance of the word embedding technique. They are not alone: there is a growing effort to systematically discover and resolve issues of algorithmic bias in black-box algorithms BIBREF25 . The success of these results suggest that a similar technique could be used to remove gender bias from Google Translate outputs, should it exist. This paper intends to investigate whether it does. We are optimistic that our research endeavors can be used to argue that there is a positive payoff in redesigning modern statistical translation tools."
],
[
"In this paper we assume that a statistical translation tool should reflect at most the inequality existent in society – it is only logical that a translation tool will poll from examples that society produced and, as such, will inevitably retain some of that bias. It has been argued that one's language affects one's knowledge and cognition about the world BIBREF26 , and this leads to the discussion that languages that distinguish between female and male genders grammatically may enforce a bias in the person's perception of the world, with some studies corroborating this, as shown in BIBREF27 , as well some relating this with sexism BIBREF28 and gender inequalities BIBREF29 .",
"With this in mind, one can argue that a move towards gender neutrality in language and communication should be striven as a means to promote improved gender equality. Thus, in languages where gender neutrality can be achieved – such as English – it would be a valid aim to create translation tools that keep the gender-neutrality of texts translated into such a language, instead of defaulting to male or female variants.",
"We will thus assume throughout this paper that although the distribution of translated gender pronouns may deviate from 50:50, it should not deviate to the extent of misrepresenting the demographics of job positions. That is to say we shall assume that Google Translate incorporates a negative gender bias if the frequency of male defaults overestimates the (possibly unequal) distribution of male employees per female employee in a given occupation."
],
[
"We shall assume and then show that the phenomenon of gender bias in machine translation can be assessed by mapping sentences constructed in gender neutral languages to English by the means of an automated translation tool. Specifically, we can translate sentences such as the Hungarian “ő egy ápolónő”, where “ápolónő” translates to “nurse” and “ő” is a gender-neutral pronoun meaning either he, she or it, to English, yielding in this example the result “she's a nurse” on Google Translate. As Figure FIGREF1 clearly shows, the same template yields a male pronoun when “nurse” is replaced by “engineer”. The same basic template can be ported to all other gender neutral languages, as depicted in Table TABREF4 . Given the success of Google Translate, which amounts to 200 million users daily, we have chosen to exploit its API to obtain the desired thermometer of gender bias. Also, in order to solidify our results, we have decided to work with a fair amount of gender neutral languages, forming a list of these with help from the World Atlas of Language Structures (WALS) BIBREF30 and other sources. Table TABREF2 compiles all languages we chose to use, with additional columns informing whether they (1) exhibit a gender markers in the sentence and (2) are supported by Google Translate. However, we stumbled on some difficulties which led to some of those langauges being removed, which will be explained in . There is a prohibitively large class of nouns and adjectives that could in principle be substituted into our templates. To simplify our dataset, we have decided to focus our work on job positions – which, we believe, are an interesting window into the nature of gender bias –, and were able to obtain a comprehensive list of professional occupations from the Bureau of Labor Statistics' detailed occupations table BIBREF31 , from the United States Department of Labor. The values inside, however, had to be expanded since each line contained multiple occupations and sometimes very specific ones. Fortunately this table also provided a percentage of women participation in the jobs shown, for those that had more than 50 thousand workers. We filtered some of these because they were too generic ( “Computer occupations, all other”, and others) or because they had gender specific words for the profession (“host/hostess”, “waiter/waitress”). We then separated the curated jobs into broader categories (Artistic, Corporate, Theatre, etc.) as shown in Table TABREF3 . Finally, Table TABREF5 shows thirty examples of randomly selected occupations from our dataset. For the occupations that had less than 50 thousand workers, and thus no data about the participation of women, we assumed that its women participation was that of its upper category. Finally, as complementary evidence we have decided to include a small subset of 21 adjectives in our study. All adjectives were obtained from the top one thousand most frequent words in this category as featured in the Corpus of Contemporary American English (COCA) https://corpus.byu.edu/coca/, but it was necessary to manually curate them because a substantial fraction of these adjectives cannot be applied to human subjects. Also because the sentiment associated with each adjective is not as easily accessible as for example the occupation category of each job position, we performed a manual selection of a subset of such words which we believe to be meaningful to this study. These words are presented in Table TABREF6 . 
We made all code and data used to generate and compile the results presented in the following sections publicly available in the following Github repository: https://github.com/marceloprates/Gender-Bias. Note however that because the Google Translate algorithm can change, unfortunately we cannot guarantee full reproducibility of our results. All experiments reported here were conducted on April 2018."
],
[
"While it is possible to construct gender neutral sentences in two of the languages omitted in our experiments (namely Korean and Nepali), we have chosen to omit them for the following reasons:",
"We faced technical difficulties to form templates and automatically translate sentences with the right-to-left, top-to-bottom nature of the script and, as such, we have decided not to include it in our experiments.",
"Due to Nepali having a rather complex grammar, with possible male/female gender demarcations on the phrases and due to none of the authors being fluent or able to reach someone fluent in the language, we were not confident enough in our ability to produce the required templates. Bengali was almost discarded under the same rationale, but we have decided to keep it because of our sentence template for Bengali has a simple grammatical structure which does not require any kind of inflection.",
"One can construct gender neutral phrases in Korean by omitting the gender pronoun; in fact, this is the default procedure. However, the expressiveness of this omission depends on the context of the sentence being clear, which is not possible in the way we frame phrases."
],
[
"A sensible way to group translation data is to coalesce occupations in the same category and collect statistics among languages about how prominent male defaults are in each field. What we have found is that Google Translate does indeed translate sentences with male pronouns with greater probability than it does either with female or gender-neutral pronouns, in general. Furthermore, this bias is seemingly aggravated for fields suggested to be troubled by male stereotypes, such as life and physical sciences, architecture, engineering, computer science and mathematics BIBREF32 . Table TABREF11 summarizes these data, and Table TABREF12 summarizes it even further by coalescing occupation categories into broader groups to ease interpretation. For instance, STEM (Science, Technology, Engineering and Mathematics) fields are grouped into a single category, which helps us compare the large asymmetry between gender pronouns in these fields ( INLINEFORM0 of male defaults) to that of more evenly distributed fields such as healthcare ( INLINEFORM1 ).",
"Plotting histograms for the number of gender pronouns per occupation category sheds further light on how female, male and gender-neutral pronouns are differently distributed. The histogram in Figure FIGREF13 suggests that the number of female pronouns is inversely distributed – which is mirrored in the data for gender-neutral pronouns in Figure FIGREF15 –, while the same data for male pronouns (shown in Figure FIGREF14 ) suggests a skew normal distribution. Furthermore we can see both on Figures FIGREF13 and FIGREF14 how STEM fields (labeled in beigeexhibit predominantly male defaults – amounting predominantly near INLINEFORM0 in the female histogram although much to the right in the male histogram.",
"These values contrast with BLS' report of gender participation, which will be discussed in more detail in Section SECREF8 .",
"We can also visualize male, female, and gender neutral histograms side by side, in which context is useful to compare the dissimilar distributions of translated STEM and Healthcare occupations (Figures FIGREF16 and FIGREF17 respectively). The number of translated female pronouns among languages is not normally distributed for any of the individual categories in Table TABREF3 , but Healthcare is in many ways the most balanced category, which can be seen in comparison with STEM – in which male defaults are second to most prominent.",
"The bar plots in Figure FIGREF18 help us visualize how much of the distribution of each occupation category is composed of female, male and gender-neutral pronouns. In this context, STEM fields, which show a predominance of male defaults, are contrasted with Healthcare and educations, which show a larger proportion of female pronouns.",
"Although computing our statistics over the set of all languages has practical value, this may erase subtleties characteristic to each individual idiom. In this context, it is also important to visualize how each language translates job occupations in each category. The heatmaps in Figures FIGREF19 , FIGREF20 and FIGREF21 show the translation probabilities into female, male and neutral pronouns, respectively, for each pair of language and category (blue is INLINEFORM0 and red is INLINEFORM1 ). Both axes are sorted in these Figures, which helps us visualize both languages and categories in an spectrum of increasing male/female/neutral translation tendencies. In agreement with suggested stereotypes, BIBREF32 STEM fields are second only to Legal ones in the prominence of male defaults. These two are followed by Arts & Entertainment and Corporate, in this order, while Healthcare, Production and Education lie on the opposite end of the spectrum.",
"Our analysis is not truly complete without tests for statistical significant differences in the translation tendencies among female, male and gender neutral pronouns. We want to know for which languages and categories does Google Translate translate sentences with significantly more male than female, or male than neutral, or neutral than female, pronouns. We ran one-sided t-tests to assess this question for each pair of language and category and also totaled among either languages or categories. The corresponding p-values are presented in Tables TABREF22 , TABREF23 , TABREF24 respectively. Language-Category pairs for which the null hypothesis was not rejected for a confidence level of INLINEFORM0 are highlighted in blue. It is important to note that when the null hypothesis is accepted, we cannot discard the possibility of the complementary null hypothesis being rejected. For example, neither male nor female pronouns are significantly more common for Healthcare positions in the Estonian language, but female pronouns are significantly more common for the same category in Finnish and Hungarian. Because of this, Language-Category pairs for which the complementary null hypothesis is rejected are painted in a darker shade of blue (see Table TABREF22 for the three examples cited above.",
"Although there is a noticeable level of variation among languages and categories, the null hypothesis that male pronouns are not significantly more frequent than female ones was consistently rejected for all languages and all categories examined. The same is true for the null hypothesis that male pronouns are not significantly more frequent than gender neutral pronouns, with the one exception of the Basque language (which exhibits a rather strong tendency towards neutral pronouns). The null hypothesis that neutral pronouns are not significantly more frequent than female ones is accepted with much more frequency, namely for the languages Malay, Estonian, Finnish, Hungarian, Armenian and for the categories Farming & Fishing & Forestry, Healthcare, Legal, Arts & Entertainment, Education. In all three cases, the null hypothesis corresponding to the aggregate for all languages and categories is rejected. We can learn from this, in summary, that Google Translate translates male pronouns more frequently than both female and gender neutral ones, either in general for Language-Category pairs or consistently among languages and among categories (with the notable exception of the Basque idiom)."
],
[
"We have taken the care of experimenting with a fair amount of different gender neutral languages. Because of that, another sensible way of coalescing our data is by language groups, as shown in Table TABREF25 . This can help us visualize the effect of different cultures in the genesis – or lack thereof – of gender bias. Nevertheless, the barplots in Figure FIGREF26 are perhaps most useful to identifying the difficulty of extracting a gender pronoun when translating from certain languages. Basque is a good example of this difficulty, although the quality of Bengali, Yoruba, Chinese and Turkish translations are also compromised."
],
[
"We queried the 1000 most frequently used adjectives in English, as classified in the COCA corpus [https://corpus.byu.edu/coca/], but since not all of them were readily applicable to the sentence template we used, we filtered the N adjectives that would fit the templates and made sense for describing a human being. The list of adjectives extracted from the corpus is available on the Github repository: https://github.com/marceloprates/Gender-Bias.",
"Apart from occupations, which we have exhaustively examined by collecting labor data from the U.S. Bureau of Labor Statistics, we have also selected a small subset of adjectives from the Corpus of Contemporary American English (COCA) https://corpus.byu.edu/coca/, in an attempt to provide preliminary evidence that the phenomenon of gender bias may extend beyond the professional context examined in this paper. Because a large number of adjectives are not applicable to human subjects, we manually curated a reasonable subset of such words. The template used for adjectives is similar to that used for occupations, and is provided again for reference in Table TABREF4 .",
"Once again the data points towards male defaults, but some variation can be observed throughout different adjectives. Sentences containing the words Shy, Attractive, Happy, Kind and Ashamed are predominantly female translated (Attractive is translated as female and gender-neutral in equal parts), while Arrogant, Cruel and Guilty are disproportionately translated with male pronouns (Guilty is in fact never translated with female or neutral pronouns)."
],
[
"A sensible objection to the conclusions we draw from our study is that the perceived gender bias in Google Translate results stems from the fact that possibly female participation in some job positions is itself low. We must account for the possibility that the statistics of gender pronouns in Google Translate outputs merely reflects the demographics of male-dominated fields (male-dominated fields can be considered those that have less than 25% of women participation BIBREF20 , according to the U.S. Department of Labor Women's Bureau). In this context, the argument in favor of a critical revision of statistic translation algorithms weakens considerably, and possibly shifts the blame away from these tools.",
"The U.S. Bureau of Labor Statistics data summarized in Table TABREF3 contains statistics about the percentage of women participation in each occupation category. This data is also available for each individual occupation, which allows us to compute the frequency of women participation for each 12-quantile. We carried the same computation in the context of frequencies of translated female pronouns, and the resulting histograms are plotted side-by-side in Figure FIGREF29 . The data shows us that Google Translate outputs fail to follow the real-world distribution of female workers across a comprehensive set of job positions. The distribution of translated female pronouns is consistently inversely distributed, with female pronouns accumulating in the first 12-quantile. By contrast, BLS data shows that female participation peaks in the fourth 12-quantile and remains significant throughout the next ones.",
"Averaged over occupations and languages, sentences are translated with female pronouns INLINEFORM0 of the time. In contrast, the gender participation frequency for female workers averaged over all occupations in the BLS report yields a consistently larger figure of INLINEFORM1 . The variance reported for the translation results is also lower, at INLINEFORM2 in contrast with the report's INLINEFORM3 . We ran an one-sided t-test to evaluate the null hypothesis that the female participation frequency is not significantly greater then the GT female pronoun frequency for the same job positions, obtaining a p-value INLINEFORM4 vastly inferior to our confidence level of INLINEFORM5 and thus rejecting H0 and concluding that Google Translate's female translation frequencies sub-estimates female participation frequencies in US job positions. As a result, it is not possible to understand this asymmetry as a reflection of workplace demographics, and the prominence of male defaults in Google Translate is, we believe, yet lacking a clear justification."
],
[
"At the time of the writing up this paper, Google Translate offered only one official translation for each input word, along with a list of synonyms. In this context, all experiments reported here offer an analysis of a “screenshot” of that tool as of August 2018, the moment they were carried out. A preprint version of this paper was posted the in well-known Cornell University-based arXiv.org open repository on September 6, 2018. The manuscript soon enjoyed a significant amount of media coverage, featuring on The Register BIBREF33 , Datanews BIBREF34 , t3n BIBREF35 , among others, and more recently on Slator BIBREF36 and Jornal do Comercio BIBREF37 . On December 6, 2018 the company's policy changed, and a statement was released detailing their efforts to reduce gender bias on Google Translate, which included a new feature presenting the user with a feminine as well as a masculine official translation (Figure FIGREF30 ). According to the company, this decision is part of a broader goal of promoting fairness and reducing biases in machine learning. They also acknowledged the technical reasons behind gender bias in their model, stating that:",
"Google Translate learns from hundreds of millions of already-translated examples from the web. Historically, it has provided only one translation for a query, even if the translation could have either a feminine or masculine form. So when the model produced one translation, it inadvertently replicated gender biases that already existed. For example: it would skew masculine for words like “strong” or “doctor,” and feminine for other words, like “nurse” or “beautiful.”",
"Their statement is very similar to the conclusions drawn on this paper, as is their motivation for redesigning the tool. As authors, we are incredibly happy to see our vision and beliefs align with those of Google in such a short timespan from the initial publishing of our work, although the company's statement does not cite any study or report in particular and thus we cannot know for sure whether this paper had an effect on their decision or not. Regardless of whether their decision was monocratic, guided by public opinion or based on published research, we understand it as an important first step on an ongoing fight against algorithmic bias, and we praise the Google Translate team for their efforts.",
"Google Translate's new feminine and masculine forms for translated sentences exemplifies how, as this paper also suggests, machine learning translation tools can be debiased, dropping the need for resorting to a balanced training set. However, it should be noted that important as it is, GT's new feature is still a first step. It does not address all of the shortcomings described in this paper, and the limited language coverage means that many users will still experience gender biased translation results. Furthermore, the system does not yet have support for non-binary results, which may exclude part of their user base.",
"In addition, one should note that further evidence is mounting about the kind of bias examined in this paper: it is becoming clear that this is a statistical phenomenon independent from any proprietary tool. In this context, the research carried out in BIBREF24 presents a very convincing argument for the sensitivity of word embeddings to gender bias in the training dataset. This suggests that machine translation engineers should be especially aware of their training data when designing a system. It is not feasible to train these models on unbiased texts, as they are probably scarce. What must be done instead is to engineer solutions to remove bias from the system after an initial training, which seems to be the goal of Google Translate's recent efforts. Fortunately, as BIBREF24 also show, debiasing can be implemented with relatively low effort and modest resources. The technology to promote social justice on machine translation in particular and machine learning in general is often already available. The most significant effort which must be taken in this context is to promote social awareness on these issues so that society can be invited into the conversation."
],
[
"In this paper, we have provided evidence that statistical translation tools such as Google Translate can exhibit gender biases and a strong tendency towards male defaults. Although implicit, these biases possibly stem from the real world data which is used to train them, and in this context possibly provide a window into the way our society talks (and writes) about women in the workplace. In this paper, we suggest that and test the hypothesis that statistical translation tools can be probed to yield insights about stereotypical gender roles in our society – or at least in their training data. By translating professional-related sentences such as “He/She is an engineer” from gender neutral languages such as Hungarian and Chinese into English, we were able to collect statistics about the asymmetry between female and male pronominal genders in the translation outputs. Our results show that male defaults are not only prominent, but exaggerated in fields suggested to be troubled with gender stereotypes, such as STEM (Science, Technology, Engineering and Mathematics) occupations. And because Google Translate typically uses English as a lingua franca to translate between other languages (e.g. Chinese INLINEFORM0 English INLINEFORM1 Portuguese) BIBREF38 , BIBREF39 , our findings possibly extend to translations between gender neutral languages and non-gender neutral languages (apart from English) in general, although we have not tested this hypothesis.",
"Our results seem to suggest that this phenomenon extends beyond the scope of the workplace, with the proportion of female pronouns varying significantly according to adjectives used to describe a person. Adjectives such as Shy and Desirable are translated with a larger proportion of female pronouns, while Guilty and Cruel are almost exclusively translated with male ones. Different languages also seemingly have a significant impact in machine gender bias, with Hungarian exhibiting a better equilibrium between male and female pronouns than, for instance, Chinese. Some languages such as Yoruba and Basque were found to translate sentences with gender neutral pronouns very often, although this is the exception rather than the rule and Basque also exhibits a high frequency of phrases for which we could not automatically extract a gender pronoun.",
"In order to strengthen our results, we ran pronominal gender translation statistics against the U.S. Bureau of Labor Statistics data on the frequency of women participation for each job position. Although Google Translate exhibits male defaults, this phenomenon may merely reflect the unequal distribution of male and female workers in some job positions. To test this hypothesis, we compared the distribution of female workers with the frequency of female translations, finding no correlation between said variables. Our data shows that Google Translate outputs fail to reflect the real-world distribution of female workers, under-estimating the expected frequency. That is to say that even if we do not expect a 50:50 distribution of translated gender pronouns, Google Translate exhibits male defaults in a greater frequency that job occupation data alone would suggest. The prominence of male defaults in Google Translate is therefore to the best of our knowledge yet lacking a clear justification.",
"We think this work sheds new light on a pressing ethical difficulty arising from modern statistical machine translation, and hope that it will lead to discussions about the role of AI engineers on minimizing potential harmful effects of the current concerns about machine bias. We are optimistic that unbiased results can be obtained with relatively little effort and marginal cost to the performance of current methods, to which current debiasing algorithms in the scientific literature are a testament."
],
[
"This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001 and the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).",
"This is a pre-print of an article published in Neural Computing and Applications."
]
],
"section_name": [
"Introduction",
"Motivation",
"Assumptions and Preliminaries",
"Materials and Methods",
"Rationale for language exceptions",
"Distribution of translated gender pronouns per occupation category",
"Distribution of translated gender pronouns per language",
"Distribution of translated gender pronouns for varied adjectives",
"Comparison with women participation data across job positions",
"Discussion",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"31ba2e8b5239579b66ce0c473f45cb988daffe8f",
"9e7a52955c5b727634ac44335038c143eb480451"
],
"answer": [
{
"evidence": [
"In order to strengthen our results, we ran pronominal gender translation statistics against the U.S. Bureau of Labor Statistics data on the frequency of women participation for each job position. Although Google Translate exhibits male defaults, this phenomenon may merely reflect the unequal distribution of male and female workers in some job positions. To test this hypothesis, we compared the distribution of female workers with the frequency of female translations, finding no correlation between said variables. Our data shows that Google Translate outputs fail to reflect the real-world distribution of female workers, under-estimating the expected frequency. That is to say that even if we do not expect a 50:50 distribution of translated gender pronouns, Google Translate exhibits male defaults in a greater frequency that job occupation data alone would suggest. The prominence of male defaults in Google Translate is therefore to the best of our knowledge yet lacking a clear justification."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In order to strengthen our results, we ran pronominal gender translation statistics against the U.S. Bureau of Labor Statistics data on the frequency of women participation for each job position."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"f53d01c238205e71565d4819b2a850b63a153a45",
"f667fc5f91960825c59121b6546f059d6db99230"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 11: Percentage of female, male and neutral gender pronouns obtained for each language, averaged over all occupations detailed in Table"
],
"extractive_spans": [],
"free_form_answer": "Malay",
"highlighted_evidence": [
"FLOAT SELECTED: Table 11: Percentage of female, male and neutral gender pronouns obtained for each language, averaged over all occupations detailed in Table"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Although there is a noticeable level of variation among languages and categories, the null hypothesis that male pronouns are not significantly more frequent than female ones was consistently rejected for all languages and all categories examined. The same is true for the null hypothesis that male pronouns are not significantly more frequent than gender neutral pronouns, with the one exception of the Basque language (which exhibits a rather strong tendency towards neutral pronouns). The null hypothesis that neutral pronouns are not significantly more frequent than female ones is accepted with much more frequency, namely for the languages Malay, Estonian, Finnish, Hungarian, Armenian and for the categories Farming & Fishing & Forestry, Healthcare, Legal, Arts & Entertainment, Education. In all three cases, the null hypothesis corresponding to the aggregate for all languages and categories is rejected. We can learn from this, in summary, that Google Translate translates male pronouns more frequently than both female and gender neutral ones, either in general for Language-Category pairs or consistently among languages and among categories (with the notable exception of the Basque idiom)."
],
"extractive_spans": [
"in general",
"exception of the Basque idiom"
],
"free_form_answer": "",
"highlighted_evidence": [
"We can learn from this, in summary, that Google Translate translates male pronouns more frequently than both female and gender neutral ones, either in general for Language-Category pairs or consistently among languages and among categories (with the notable exception of the Basque idiom)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"061737ad48da74993b78283450a8171fc063e448",
"a5889814c5be126204c8062b7e6cb2a310e559c7"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Templates used to infer gender biases in the translation of job occupations and adjectives to the English language."
],
"extractive_spans": [],
"free_form_answer": "17",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Templates used to infer gender biases in the translation of job occupations and adjectives to the English language."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do the authors examine the real-world distribution of female workers in the country/countries where the gender neutral languages are spoken?",
"Which of the 12 languages showed the strongest tendency towards male defaults?",
"How many different sentence constructions are translated in gender neutral languages?"
],
"question_id": [
"c8cf20afd75eb583aef70fcb508c4f7e37f234e1",
"3567241b3fafef281d213f49f241071f1c60a303",
"d5d48b812576470edbf978fc18c00bd24930a7b7"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"bias",
"bias",
"bias"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Translating sentences from a gender neutral language such as Hungarian to English provides a glimpse into the phenomenon of gender bias in machine translation. This screenshot from Google Translate shows how occupations from traditionally male-dominated fields [40] such as scholar, engineer and CEO are interpreted as male, while occupations such as nurse, baker and wedding organizer are interpreted as female.",
"Table 1: Gender neutral languages supported by Google Translate. Languages are grouped according to language families and classified according to whether they enforce any kind of mandatory gender (male/female) demarcation on simple phrases (X: yes, 5: never, O: some). For the purposes of this work, we have decided to work only with languages lacking such demarcation. Languages colored in red have been omitted for other reasons. See Section",
"Table 2: Selected occupations obtained from the U.S. Bureau of Labor Statistics https://www.bls.gov/cps/cpsaat11.htm, grouped by category. We obtained a total of 1019 occupations from 22 distinct categories. We have further grouped them into broader groups (or super-categories) to ease analysis and visualization.",
"Table 3: Templates used to infer gender biases in the translation of job occupations and adjectives to the English language.",
"Table 4: A randomly selected example subset of thirty occupations obtained from our dataset with a total of 1019 different occupations.",
"Table 5: Curated list of 21 adjectives obtained from the top one thousand most frequent words in this category in the Corpus of Contemporary American English (COCA)",
"Table 6: Percentage of female, male and neutral gender pronouns obtained for each BLS occupation category, averaged over all occupations in said category and tested languages detailed in Table",
"Table 7: Percentage of female, male and neutral gender pronouns obtained for each of the merged occupation category, averaged over all occupations in said category and tested languages detailed in Table",
"Figure 2: The data for the number of translated female pronouns per merged occupation category totaled among languages suggests and inverse distribution. STEM fields are nearly exclusively concentrated at X = 0, while more evenly distributed in fields such as production and healthcare (See Table",
"Figure 3: In contrast to Figure",
"Figure 4: The scarcity of gender-neutral pronouns is manifest in their histogram. Once again, STEM fields are predominantly concentrated at X = 0.",
"Figure 5: Histograms for the distribution of the number of translated female, male and gender neutral pronouns totaled among languages are plotted side by side for job occupations in the STEM (Science, Technology, Engineering and Mathematics) field, in which male defaults are the second-to-most prominent (after Legal).",
"Figure 6: Histograms for the distribution of the number of translated female, male and gender neutral pronouns totaled among languages are plotted side by side for job occupations in the Healthcare field, in which male defaults are least prominent.",
"Figure 7: Bar plots show how much of the distribution of translated gender pronouns for each occupation category (grouped as in Table 7) is composed of female, male and neutral terms. Legal and STEM fields exhibit a predominance of male defaults and contrast with Healthcare and Education, with a larger proportion of female and neutral pronouns. Note that in general the bars do not add up to 100%, as there is a fair amount of translated sentences for which we cannot obtain a gender pronoun. Categories are sorted with respect to the proportions of male, female and neutral translated pronouns respectively",
"Figure 8: Heatmap for the translation probability into female pronouns for each pair of language and occupation category. Probabilities range from 0% (blue) to 100% (red), and both axes are sorted in such a way that higher probabilities concentrate on the bottom right corner.",
"Figure 9: Heatmap for the translation probability into male pronouns for each pair of language and occupation category. Probabilities range from 0% (blue) to 100% (red), and both axes are sorted in such a way that higher probabilities concentrate on the bottom right corner.",
"Figure 10: Heatmap for the translation probability into gender neutral pronouns for each pair of language and occupation category. Probabilities range from 0% (blue) to 100% (red), and both axes are sorted in such a way that higher probabilities concentrate on the bottom right corner.",
"Table 8: Computed p-values relative to the null hypothesis that the number of translated male pronouns is not significantly greater than that of female pronouns, organized for each language and each occupation category. Cells corresponding to the acceptance of the null hypothesis are marked in blue, and within those cells, those corresponding to cases in which the complementary null hypothesis (that the number of female pronouns is not significantly greater than that of male pronouns) was rejected are marked with a darker shade of the same color. A significance level of α = .05 was adopted. Asterisks indicate cases in which all pronouns are translated with gender neutral pronouns.",
"Table 9: Computed p-values relative to the null hypothesis that the number of translated male pronouns is not significantly greater than that of gender neutral pronouns, organized for each language and each occupation category. Cells corresponding to the acceptance of the null hypothesis are marked in blue, and within those cells, those corresponding to cases in which the complementary null hypothesis (that the number of gender neutral pronouns is not significantly greater than that of male pronouns) was rejected are marked with a darker shade of the same color. A significance level of α = .05 was adopted. Asterisks indicate cases in which all pronouns are translated with gender neutral pronouns.",
"Table 10: Computed p-values relative to the null hypothesis that the number of translated gender neutral pronouns is not significantly greater than that of female pronouns, organized for each language and each occupation category. Cells corresponding to the acceptance of the null hypothesis are marked in blue, and within those cells, those corresponding to cases in which the complementary null hypothesis (that the number of female pronouns is not significantly greater than that of gender neutral pronouns) was rejected are marked with a darker shade of the same color. A significance level of α = .05 was adopted. Asterisks indicate cases in which all pronouns are translated with gender neutral pronouns.",
"Table 11: Percentage of female, male and neutral gender pronouns obtained for each language, averaged over all occupations detailed in Table",
"Figure 11: The distribution of pronominal genders per language also suggests a tendency towards male defaults, with female pronouns reaching as low as 0.196% and 1.865% for Japanese and Chinese respectively. Once again not all bars add up to 100% , as there is a fair amount of translated sentences for which we cannot obtain a gender pronoun, particularly in Basque. Among all tested languages, Basque was the only one to yield more gender neutral than male pronouns, with Bengali and Yoruba following after in this order. Languages are sorted with respect to the proportions of male, female and neutral translated pronouns respectively.",
"Table 12: Number of female, male and neutral pronominal genders in the translated sentences for each selected adjective.",
"Figure 12: The distribution of pronominal genders for each word in Table",
"Figure 13: Women participation (%) data obtained from the U.S. Bureau of Labor Statistics allows us to assess whether the Google Translate bias towards male defaults is at least to some extent explained by small frequencies of female workers in some job positions. Our data does not make a very good case for that hypothesis: the total frequency of translated female pronouns (in blue) for each 12-quantile does not seem to respond to the higher proportion of female workers (in yellow) in the last quantiles.",
"Figure 14: Comparison between the GUI of Google Translate before (left) and after (right) the introduction of the new feature intended to promote gender fairness in translation. The results described in this paper relate to the older version."
],
"file": [
"4-Figure1-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"9-Table5-1.png",
"10-Table6-1.png",
"11-Table7-1.png",
"12-Figure2-1.png",
"12-Figure3-1.png",
"13-Figure4-1.png",
"14-Figure5-1.png",
"14-Figure6-1.png",
"15-Figure7-1.png",
"16-Figure8-1.png",
"17-Figure9-1.png",
"18-Figure10-1.png",
"19-Table8-1.png",
"20-Table9-1.png",
"21-Table10-1.png",
"22-Table11-1.png",
"23-Figure11-1.png",
"24-Table12-1.png",
"25-Figure12-1.png",
"26-Figure13-1.png",
"28-Figure14-1.png"
]
} | [
"Which of the 12 languages showed the strongest tendency towards male defaults?",
"How many different sentence constructions are translated in gender neutral languages?"
] | [
[
"1809.02208-22-Table11-1.png",
"1809.02208-Distribution of translated gender pronouns per occupation category-7"
],
[
"1809.02208-8-Table3-1.png"
]
] | [
"Malay",
"17"
] | 253 |
1802.03052 | WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-Hop Inference | Developing methods of automated inference that are able to provide users with compelling human-readable justifications for why the answer to a question is correct is critical for domains such as science and medicine, where user trust and detecting costly errors are limiting factors to adoption. One of the central barriers to training question answering models on explainable inference tasks is the lack of gold explanations to serve as training data. In this paper we present a corpus of explanations for standardized science exams, a recent challenge task for question answering. We manually construct a corpus of detailed explanations for nearly all publicly available standardized elementary science questions (approximately 1,680 3rd through 5th grade questions) and represent these as "explanation graphs" -- sets of lexically overlapping sentences that describe how to arrive at the correct answer to a question through a combination of domain and world knowledge. We also provide an explanation-centered tablestore, a collection of semi-structured tables that contain the knowledge to construct these elementary science explanations. Together, these two knowledge resources map out a substantial portion of the knowledge required for answering and explaining elementary science exams, and provide both structured and free-text training data for the explainable inference task. | {
"paragraphs": [
[
"Question answering (QA) is a high-level natural language processing task that requires automatically providing answers to natural language questions. The approaches used to construct QA solvers vary depending on the questions and domain, from inference methods that attempt to construct answers from semantic, syntactic, or logical decompositions, to retrieval methods that work to identify passages of text likely to contain the answer in large corpora using statistical methods. Because of the difficulty of this task, overall QA task performance tends to be low, with generally between 20% and 80% of natural (non-artificially generated) questions answered correctly, depending on the questions, the domain, and the knowledge and inference requirements.",
"Standardized science exams have recently been proposed as a challenge task for question answering BIBREF0 , as these questions have very challenging knowledge and inference requirements BIBREF1 , BIBREF2 , but are expressed in simple-enough language that the linguistic challenges are likely surmountable in the near-term. They also provide a standardized comparison of modern inference techniques against human performance, with individual QA solvers generally answering between 40% to 50% of multiple choice science questions correctly BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , and top-performing ensemble models nearly reaching a passing grade of 60% on middle school (8th grade) science exams during a recent worldwide competition of 780 teams sponsored by the Allen Institute for AI BIBREF8 .",
"One of the central shortcomings of question answering models is that while solvers are steadily increasing the proportion of questions they answer correctly, most solvers generally lack the capacity to provide human-readable explanations or justifications for why those answers are correct. This “explainable inference” task is seen as a limitation of current machine learning models in general (e.g. Ribeiro et al., Ribeiro2016), but is critical for domains such as science or medicine where user trust and detecting potentially costly errors are important. More than this, evidence from the cognitive and pedagogy literature suggests that explanations (when tutoring others) and self-explanations (when engaged in self-directed learning) are an important aspect of learning, helping humans better generalize the knowledge they have learned BIBREF9 , BIBREF10 , BIBREF11 . This suggests that explainable methods of inference may not only be desirable for users, but may be a requirement for automated systems to have human-like generalization and inference capabilities.",
"Building QA solvers that generate explanations for their answers is a challenging task, requiring a number of inference capacities. Central among these is the idea of information aggregation, or the idea that explanations for a given question are rarely found in a contiguous passage of text, and as such inference methods must generally assemble many separate pieces of knowledge from different sources in order to arrive at a correct answer. Previous estimates BIBREF2 suggest elementary science questions require an average of 4 pieces of knowledge to answer and explain those answers (here our analysis suggests this is closer to 6), but inference methods tend to have difficulty aggregating more than 2 pieces of knowledge from free-text together due to the semantic or contextual “drift” associated with this aggregation BIBREF12 . Because of the difficulty in assembling training data for the information aggregation task, some have approached explanation generation as a distant supervision problem, with explanation quality modelled as a latent variable BIBREF7 , BIBREF13 . While these techniques have had some success in constructing short explanations, semantic drift likely limits the viability of this technique for explanations requiring more than two pieces of information to be aggregated.",
"To address this, here we construct a large corpus of explanation graphs (see Figure 1 ) to serve as training data for explainable inference tasks. The contributions of this work are are:"
],
[
"In terms of question answering, the ability to provide compelling human-readable explanations for answers to questions has been proposed as a complementary metric to assess QA performance alongside the proportion of questions answered correctly. Jansen et al. jansen2017framing developed a QA system for elementary science that answers questions by building and ranking explanation graphs built from aggregating multiple sentences read from free text corpora, including study guides and dictionaries. Because of the difficulty in constructing gold explanations to serve as training data, the explanations built with this system were constructed by modeling explanation quality as a latent variable machine learning problem. First, sentences were decomposed into sentence graphs based on clausal and prepositional boundaries, then assembled into multi-sentence “explanation graphs”. Questions were answered by ranking these candidate explanation graphs, using answer correctness as well as features that capture the connectivity of key-terms in the graphs as a proxy for explanation quality. Jansen at al. jansen2017framing showed that it is possible to learn to generate high quality explanations for 60% of elementary science questions using this method, an increase of 15% over a baseline that retrieved single continuous passages of text as answer justifications. Critically, in their error analysis Jansen et al. found that for questions answered incorrectly by their system, nearly half had successfully generated high-quality explanation graphs and ranked these highly, though they were not ultimately selected. They suggest that the process of building and ranking explanations would be aided by developing more expensive second-pass reranking processes that are able to better recognize the components and structure of high-quality explanations within a short list of candidates.",
"Knowledge bases of tables, or “table stores”, have recently been proposed as a semi-structured knowledge formalism for question answering that balances the cost of manually crafting highly-structured knowledge bases with the difficulties in acquiring this knowledge from free text BIBREF14 , BIBREF15 , BIBREF16 . The methods for question answering over tables generally take the form of constructing chains of multiple table rows that lead from terms in the question to terms in the answer, while the tables themselves are generally either collected from the web, automatically generated by extracting relations from free text, or manually constructed.",
"At the collection end of the spectrum, Pasupat and Liang pasupat:2015 extract 2,108 HTML tables from Wikipedia, and propose a method of answering these questions by reasoning over the tables using formal logic. They also introduce the WikiTableQuestions dataset, a set of 22,033 question-answer pairs (such as “Greece held its last Summer Olympics during which year?”) that can be answered using these tables. Demonstrating the ability for collection at scale, Sun et al. sun:2016table extract a total of 104 million tables from Wikipedia and the web, and develop a model that constructs relational chains between table rows using a deep-learning framework. Using their system and table store, Sun et al. demonstrate state-of-the-art performance on several benchmark datasets, including WebQuestions BIBREF17 , a set of popular questions asked from the web designed to be answerable using the large structured knowledge graph Freebase (e.g. “What movies does Morgan Freeman star in?”).",
"In terms of automatic generation, though relations are often represented as $<subject, relation, argument>$ triples, Yin et al. yin:2015answering create a large table containing 120M n-tuple relations using OpenIE BIBREF18 , arguing that the extra expressivity afforded by these more detailed relations allows their system to answer more complex questions. Yin et al. use this to successfully reason over the WebQuestions dataset, as well as their own set of questions with more complex prepositional and adverbial constraints.",
"Elementary science exams contain a variety of complex and challenging inference problems BIBREF1 , BIBREF2 , with nearly 70% of questions requiring some form of causal, process, or model-based reasoning to solve and produce an explanation for. In spite of these exams being taken by millions of students each year, elementary students tend not to be fast or voluminous readers by adult standards, making this a surprisingly low-resource domain for grade-appropriate study guides and other materials. The questions also tend to require world knowledge expressed in grade-appropriate language (like that bears have fur and that fur keeps animals warm) to solve. Because of these requirements and limitations, table stores for elementary science QA tend to be manually or semi-automatically constructed, and comparatively small.",
"Khashabi et al. Khashabi:2016TableILP provide the largest elementary science table store to date, containing approximately 5,000 manually-authored rows across 65 tables based on science curriculum topics obtained from study guides and a small corpus of questions. Khashabi et al. also augment their tablestore with 4 tables containing 2,600 automatically generated table rows using OpenIE triples. Reasoning is accomplished using an integer-linear programming algorithm to chain table rows, with Khashabi et al. reporting that an average of 2 table rows are used to answer each question. Evaluation on a small set of 129 science questions achieved passing performance (61%), with an ablation study showing that the bulk of their model's performance was from the manually authored tables.",
"To help improve the quality of automatically generated tables, Dalvi et al. Dalvi2016IKE introduce an interactive tool for semi-automatic table generation that allows annotators to query patterns over large corpora. They demonstrate that this tool can improve the speed of knowledge generation by up to a factor of 4 over manual methods, while increasing the precision and utility of the tables up to seven fold compared to completely automatic methods.",
"All of the above systems share the commonality that they work to connect (or aggregate) multiple pieces of knowledge that, through a variety of inference methods, move towards the goal of answering questions. Fried et al. fried2015higher report that information aggregation for QA is currently very challenging, with few methods able to combine more than two pieces of knowledge before succumbing to semantic drift, or the phenomenon of two pieces of knowledge being erroneously connected due to shared lexical overlap, incomplete word-sense disambiguation, or other noisy signals (e.g. erroneously aggregating a sentence about Apple computers to an inference when working to determine whether apples are a kind of fruit). In a generating a corpus of natural-language explanations for 432 elementary science questions, Jansen et al. jansen2016:COLING found that the average question requires aggregating 4 separate pieces of knowledge to explainably answer, with some questions requiring much longer explanations.",
"Though few QA solvers explicitly report the aggregation limits of their algorithms, Fried et al. fried2015higher, Khabashi et al. Khashabi:2016TableILP and Jansen et al. jansen2017framing appear to show limits or substantial decreases in performance after aggregating two pieces of knowledge. To the best of our knowledge, of systems that use information aggregation, only Jansen et al. jansen2017framing explicitly rate the explanatory performance of the justifications from their model, with good explanations generated for only 60% of correctly answered questions. Taken together, all of this suggests that performance on information aggregation and explainable question answering is still far from human performance, and could substantially benefit from a large corpus of training data for these tasks."
],
[
"We began with the following design goals:",
"Computable explanations: Explanations should be represented at different levels of structure (explanation, then sentences, then relations within sentences). The knowledge links between explanation sentences should be explicit through lexical overlap, which can be used to form an “explanation graph” that describes how each sentence is linked in an explanation.",
"Depth: Sufficient knowledge should be present in explanations such that that the answer could be arrived at with little extra domain or world knowledge – i.e. where possible, explanations should be targeted at the level of knowledge of a 5-year old child, or lower (see below for a more detailed discussion of explanatory depth).",
"Reuse: Where possible, knowledge should be re-used across explanations to facilitate automated analysis of knowledge use, and identifying common explanation patterns across questions."
],
[
"The level of knowledge required to convincingly explain why an answer to a question is correct depends upon one's familiarity with the domain of the question. For a domain expert (such as an elementary science teacher), a convincing explanation to why thick bark is the correct answer to ”Which characteristic could best help a tree survive the heat of a forest fire?” might need only take the form of explaining that one of bark's primary functions is to provide protection for the tree. In contrast, for a domain novice, such as an elementary science student, this explanation might need to be elaborated to include more knowledge to make this inference, such as that thicker things tend to provide more protection. Here we identify four coarse levels of increasing explanatory knowledge depth, shown in Table 1 .",
"For training explainable inference systems, a high level of explanatory depth is likely required. As such, in this work we target authoring explanations between the levels of young child and first principles. Pragmatically, in spite of their ultimate utility for training inference systems, building explanations too close to first principles becomes laborious and challenging for annotators given the level of abstraction and the large amount of implicit world knowledge that must be enumerated, and we leave developing protocols and methods for building such detailed explanations for future work."
],
[
"We describe our representations, tools, and annotation process below."
],
[
"We author explanation graphs for a corpus of 2,201 elementary science questions (3rd through 5th grade) from the AI2 Science Questions V2 corpus, consisting of both standardized exam questions from 12 US states, as well as the separate AI2 Science Questions Mercury dataset, a set of questions licensed from a student assessment entity. Each question is a 4-way multiple choice question, and only those questions that do not involve diagram interpretation (a separate spatial task) are included. Approximately 20% of explanations required specialized domain knowledge (for example, spatial or mathematical knowledge) that did not easily lend itself to explanation using our formalism, resulting in a corpus of 1,680 questions and explanations."
],
[
"Explanations for a given question consist of a set of sentences, each of which is on a single topic and centered around a particular kind of relation, such as water is a kind of liquid (a taxonomic relation), or melting means changing from a solid to a liquid through the addition of heat energy (a change relation).",
"Each explanation sentence is represented as a single row from a semi-structured table defined around a particular relation. Our tablestore includes 62 such tables, each centered around a particular relation such as taxonomy, meronymy, causality, changes, actions, requirements, or affordances, and a number of tables specified around specific properties, such as average lifespans of living things, the magnetic properties of materials, or the nominal durations of certain processes (like the Earth orbiting the Sun). The initial selection of table relations was drawn from a list of 21 common relations required for science explanations identified by Jansen et al. jansen2016:COLING on a smaller corpus, and expanded as new knowledge types were identified. Subsets of example tables are included in Figure 2 . Each explanation in this corpus contains an average of 6.3 rows.",
"Fine-grained column structure: In tabular representations, columns represent specific roles or arguments to a specific relation (such as X is when Y changes from A to B using mechanism C). In our tablestore we attempt to minimize the amount of information per cell, instead favouring tables with many columns that explicitly identify common roles, conditions, or other relations. This finer-grained structure eases the annotator's cognitive load when authoring new rows, while also better compartmentalizing the relational knowledge in each row for inference algorithms. The tables in our tablestore contain between 2 and 16 content columns, as compared to 2 to 5 columns for the Ariso tablestore BIBREF5 .",
"Natural language sentences: QA models use a variety of different representations for inference, from semantic roles and syntactic dependencies to discourse and embeddings. Following Khashabi et al. Khashabi:2016TableILP, we make use of a specific form of table representation that includes “filler” columns that allow each row to be directly read off as a stand-alone natural language sentence, and serve as input to any model. Examples of these filler columns can be seen in Figure 2 ."
],
[
"Explanations for a given question here take the form of a list of sentences, where each sentence is a reference to a specific table row in the table store. To increase their utility for knowledge and inference analyses, we require that each sentence in an explanation be explicitly lexically connected (i.e. share words) with either the question, answer, or other sentences in the explanation. We call this lexically-connected set of sentences an explanation graph.",
"In our preliminary analysis, we observed that the sentences in our explanations can take on very different roles, and we hypothesize that differentiating these roles is likely important for inference algorithms. We identified four coarse roles, listed in Table 2 , and described below:",
"Central: The central concept(s) that a question is testing, such as changes of state or the coupled relationship between kinetic energy and temperature.",
"Grounding: Sentences linking generic or abstract terms in a central sentence with specific instances of those terms in the question or answer. For example, for questions about changes of state, grounding sentences might identify specific instances of liquids (such as water) or gasses (such as water vapor).",
"Background: Extra information elaborating on the topic, but that (strictly speaking) isn't required to arrive at the correct inference.",
"Lexical glue: Sentences that lexically link two concepts, such as “to add means to increase”, or “heating means adding heat”. This is an artificial category in our corpus, brought about by the need for explanation graphs to be explicitly lexically linked.",
"For each sentence in each authored explanation, we provide annotation indicating which of these four roles the sentence serves in that explanation.",
"Note that this figure also appears in an earlier workshop submission on identifying explanatory patterns BIBREF19 "
],
[
"To facilitate explanation authoring, we developed and iterated the web-based collaborative authoring tool shown in Figure 3 . The tool displays a given question to the explanation author, and allows the author to progressively build an explanation graph for that question by querying the tablestore for relevant rows based on keyword searches, as well as past explanations that are likely to contain similar content or structure (increasing consistency across explanations, while reducing annotation time). A graphical visualization of the explanation graph helps the author quickly assess gaps in the explanation content to address by highlighting lexical overlap between sentences with coloured edges and labels. The tablestore takes the form of a shared Google Sheet that the annotators populate, with each table represented as a separate tab on the sheet."
],
[
"For a given question, annotators identified the central concept the question was testing, as well as the inference required to correctly answer the question, then began progressively constructing the explanation graph. Sentences in the graph were added by querying the tablestore based on keywords, which retrieved both single sentences/table rows, as well as entire explanations that had been previously annotated. If any knowledge required to build an explanation did not exist in the tablestore, this was added to an appropriate table, then added to the explanation.",
"New tables were regularly added, most commonly for property knowledge surrounding a particular topic (e.g. whether a particular material is recyclable). Because explanations are stored as lists of unique identifiers to table rows, tables and table rows could regularly be refactored, elaborated, or entirely reorganized without requiring existing explanations to be rewritten. We found this was critical for consistency and ensuring good organization throughout corpus construction.",
"One of the central difficulties with evaluating explanation authoring is determining metrics for interannotator agreement, as many correct explanations are possible for a given question, and there are many different wordings that an annotator might choose to express a given piece of knowledge in the tablestore. Similarly, the borders between different levels of explanatory depth are fuzzy, suggesting that one annotator may express their explanation with more or less specificity than another.",
"To address these difficulties we included two methods to increase consistency. First, as a passive intervention during the explanation generation process, annotators are presented with existing explanations that can be drawn from to compose a new explanation, where these existing explanations share many of the same query terms being used to construct the new explanation. Second, as an active intervention, each explanation goes through four review passes to ensure consistency. The first two passes are completed by the original annotator, before checking a flag on the annotation tool signifying that the question is ready for external review. A second annotator then checks the question for completeness and consistency with existing explanations, and composes a list of suggested edits and revisions. The fourth and final pass is completed by the original annotator, who implements these suggested revisions. This review process is expensive, taking approximately one third of the total time required to annotate each question.",
"Each annotator required approximately 60 hours of initial training for this explanation authoring task. We found that most explanations could be constructed within 5-10 minutes, with the review process taking approximately 5 more minutes per question."
],
[
"Here we characterize three properties of the explanation corpus as they relate to developing methods of explainable inference: knowledge frequency, explanation overlap, and tablestore growth."
],
[
"The tables most frequently used to author explanations are shown in Table 3 , broken down into three broad categories identified by Jansen et al. jansen2016:COLING: retrieval types, inference-supporting types, and complex inference types. Because the design of this corpus is data driven – i.e., knowledge is generally added to a table because it is required in one or more explanations – we can calculate how frequently the rows in a given table are reused to obtain an approximate measure of the generality of that knowledge. On average, a given table row is used in 2.9 different explanations, with 1,535 rows used more than once, and 531 rows used 5 or more times. The most frequently reused row (”an animal is a kind of organism”) is used in 89 different explanations. Generic “change of state” knowledge (e.g. solids, liquids, and gasses) is also frequently reused, with each row in the StatesOfMatter table used in an average of 15.7 explanations. Usage statistics for other common tables are also provided in Table 3 ."
],
[
"One might hypothesize that questions that require similar inferences to correctly answer may also contain some of the same knowledge in their explanations, with the amount of knowledge overlap dependent upon the similarity of the questions. We plan to explore using this overlap as a method of inference that can generate new explanations by editing, merging, or expanding known explanations from similar, known questions (see Jansen jansen:akbc2017 for an initial study). For this to be possible, an explanation corpus must reach a sufficient size that a large majority of questions have substantial overlap in their explanations.",
"Figure 5 shows the proportion of questions in the corpus that have 1 or more, 2 or more, 3 or more, etc., overlapping rows in their explanations with at least one other question in the corpus. Similarly, to ground this, Figure 4 shows a visualization of questions whose explanations have 2 or more overlapping rows. For a given level of overlapping explanation sentences, Figure 5 shows that the proportion of questions with that level of overlap increases logarithmically with the number of questions.",
"This has two consequences. First, it allows us to estimate the size of corpus required to train hypothetical inference methods for the science exam domain capable of producing explanations. If a given inference method can work successfully with only minimal overlap (for example, 1 shared table row), then a training corpus of 500 explanations in this domain should be sufficient to answer 80% of questions. If an inference method requires 2 shared rows, the corpus requirements would increase to approximately 2,500 questions to answer 80% of questions. However, if an inference method requires 3 or more rows, this likely would not be possible without a corpus of at least 20,000 questions and explanations – a substantial undertaking. Second, because this relationship is strongly logarithmic, if it transfers to domains outside elementary science, it should be possible to estimate the corpus size requirements for those domains after authoring explanations for only a few hundred questions."
],
[
"Finally, we examine the growth of the tablestore as it relates to the number of questions in the corpus. Figure 6 shows a monte-carlo simulation of the number of unique tablestore rows required to author explanations for specific corpus sizes. This relationship is strongly correlated (R=0.99) with an exponential proportional decrease. For this elementary science corpus, this asymptotes at approximately 6,000 unique table rows, and 10,000 questions, providing an estimate of the upper-bound of knowledge required in this domain, and the number of unique questions that can be generated within the scope of the elementary science curriculum.",
"The caveat to this estimate is that it estimates the knowledge required for elementary science exams as they currently exist, with the natural level of variation introduced by the test designers. Questions are naturally grounded in examples, such as “Which part of an oak tree is responsible for undertaking photosynthesis?” (Answer: the leaves). While the corpus often contains a number of variations of a given question that test the same curriculum topic and have similar explanations, many more variations on these questions are possible that ground the question in different examples, like orchids, peach trees, or other plants. As such, while we believe that these estimates likely cover the core knowledge of the domain, many times that knowledge would be required to make the explanation tablestore robust to small variations in the presentation of those existing exam questions, or to novel unseen questions."
],
[
"We provide a corpus of explanation graphs for elementary science questions suitable for work in developing explainable methods of inference, and show that the knowledge frequency, explanation overlap, and tablestore growth properties of the corpus follow predictable relationships. This work is open source, with the corpus and generation tools available at http://www.cognitiveai.org/explanationbank."
],
[
"We thank the Allen Institute of Artificial Intelligence for funding this work, Peter Clark at AI2 for thoughtful discussions, and Paul Hein for assistance constructing the annotation tool."
]
],
"section_name": [
"Introduction",
"Related Work",
"Design Goals",
"Explanation Depth",
"Explanation Authoring",
"Questions",
"Tables and Table Rows",
"Explanation Graphs and Sentence Roles",
"Annotation Tool",
"Procedure and Explanation Review",
"Explanation Corpus Properties",
"Knowledge Use and Row Frequency",
"Explanation Overlap",
"Explanation Tablestore Growth",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"065913f7e431e24847696e9fff2bf4e48136a4dd",
"22a425238e9554b3ffb8d17d26c84dfef8a18d27"
],
"answer": [
{
"evidence": [
"Explanations for a given question here take the form of a list of sentences, where each sentence is a reference to a specific table row in the table store. To increase their utility for knowledge and inference analyses, we require that each sentence in an explanation be explicitly lexically connected (i.e. share words) with either the question, answer, or other sentences in the explanation. We call this lexically-connected set of sentences an explanation graph."
],
"extractive_spans": [],
"free_form_answer": "They share words.",
"highlighted_evidence": [
"To increase their utility for knowledge and inference analyses, we require that each sentence in an explanation be explicitly lexically connected (i.e. share words) with either the question, answer, or other sentences in the explanation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Explanations for a given question here take the form of a list of sentences, where each sentence is a reference to a specific table row in the table store. To increase their utility for knowledge and inference analyses, we require that each sentence in an explanation be explicitly lexically connected (i.e. share words) with either the question, answer, or other sentences in the explanation. We call this lexically-connected set of sentences an explanation graph."
],
"extractive_spans": [
"share words"
],
"free_form_answer": "",
"highlighted_evidence": [
"Explanations for a given question here take the form of a list of sentences, where each sentence is a reference to a specific table row in the table store. To increase their utility for knowledge and inference analyses, we require that each sentence in an explanation be explicitly lexically connected (i.e. share words) with either the question, answer, or other sentences in the explanation. We call this lexically-connected set of sentences an explanation graph."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"8604bcc30a9b73a3263733f4583b6acce9849124",
"b090f5e6c2d5e9e46131e24d5a437dfb6e69b9c1"
],
"answer": [
{
"evidence": [
"Each explanation sentence is represented as a single row from a semi-structured table defined around a particular relation. Our tablestore includes 62 such tables, each centered around a particular relation such as taxonomy, meronymy, causality, changes, actions, requirements, or affordances, and a number of tables specified around specific properties, such as average lifespans of living things, the magnetic properties of materials, or the nominal durations of certain processes (like the Earth orbiting the Sun). The initial selection of table relations was drawn from a list of 21 common relations required for science explanations identified by Jansen et al. jansen2016:COLING on a smaller corpus, and expanded as new knowledge types were identified. Subsets of example tables are included in Figure 2 . Each explanation in this corpus contains an average of 6.3 rows."
],
"extractive_spans": [
"62"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our tablestore includes 62 such tables, each centered around a particular relation such as taxonomy, meronymy, causality, changes, actions, requirements, or affordances, and a number of tables specified around specific properties, such as average lifespans of living things, the magnetic properties of materials, or the nominal durations of certain processes (like the Earth orbiting the Sun)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Each explanation sentence is represented as a single row from a semi-structured table defined around a particular relation. Our tablestore includes 62 such tables, each centered around a particular relation such as taxonomy, meronymy, causality, changes, actions, requirements, or affordances, and a number of tables specified around specific properties, such as average lifespans of living things, the magnetic properties of materials, or the nominal durations of certain processes (like the Earth orbiting the Sun). The initial selection of table relations was drawn from a list of 21 common relations required for science explanations identified by Jansen et al. jansen2016:COLING on a smaller corpus, and expanded as new knowledge types were identified. Subsets of example tables are included in Figure 2 . Each explanation in this corpus contains an average of 6.3 rows."
],
"extractive_spans": [
"62"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our tablestore includes 62 such tables, each centered around a particular relation such as taxonomy, meronymy, causality, changes, actions, requirements, or affordances, and a number of tables specified around specific properties, such as average lifespans of living things, the magnetic properties of materials, or the nominal durations of certain processes (like the Earth orbiting the Sun)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What does it mean for sentences to be \"lexically overlapping\"?",
"How many tables are in the tablestore?"
],
"question_id": [
"641fe5dc93611411582e6a4a0ea2d5773eaf0310",
"7d34cdd9cb1c988e218ce0fd59ba6a3b5de2024a"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"semi-structured",
"semi-structured"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: An example multiple choice science question, the correct answer, and a sample explanation graph for why that answer is correct. Here, the explanation graph consists of six sentences, each interconnected through lexical overlap with the question, answer, and other explanation sentences.",
"Figure 2: Examples of tables and table rows from the tablestore, grounded in an example question and explanation. Table columns define the primary roles or arguments for a given relation (e.g. process name, actor, role, etc). Unlabeled “filler” columns allow each row to be used as a stand-alone natural language sentence. Note that for clarity only 4 example rows per table are shown.3",
"Table 2: Examples of the four coarse classes of explanation sentence roles, central, grounding, background, and lexical glue.",
"Figure 3: The explanation authoring web tool. Interface components include: (1) A list of user-settable flags to assist in the annotation and quality review process; (2) Question and answer candidates; (3) Query terms for search; (4) Query results (tablestore); (5) Query results (complete explanations); (6) Current explanation being assembled; (7) Explanation graph visualization of lexical overlap within the explanation.",
"Table 3: The proportion of explanations that contain knowledge from a given table, sorted by most frequent knowledge, and broken down by the knowledge type of a given table. Tables not used in at least 3% of explanations are not shown. (P) indicates a given table describes properties, e.g. whether a given material is conductive. Average Row Frequency refers to the average number of explanations a given row from that table is used in.",
"Figure 4: Questions in this explanation corpus connected by explanation overlap. Here, nodes represent questions and their explanations, and edges between nodes represent two questions having at least 2 or more (i.e. 2+) shared rows (i.e. sentences) in their explanations, with at least one of these shared rows being labelled as having a CENTRAL role to the explanation. Topic clusters (labels) naturally emerge for questions requiring similar methods of inference, based on the shared content of their explanations.",
"Figure 6: Monte-carlo simulation showing the number of unique table rows required to explainably answer a given number of questions. The line of best fit (dashed) suggests that this is a proportional decay relationship (R2 = 0.99), asymptoting at approximately 6,000 table rows and 10,000 questions. Each point represents the average of 10,000 simulations.",
"Figure 5: Monte-carlo simulation showing the proportion of questions whose explanations overlap by 1 or more, 2 or more, 3 or more, ..., explanation sentences. The proportion increases logarithmically with the number of questions in the corpus. Each point represents the average of 100 simulations."
],
"file": [
"2-Figure1-1.png",
"5-Figure2-1.png",
"5-Table2-1.png",
"6-Figure3-1.png",
"7-Table3-1.png",
"8-Figure4-1.png",
"8-Figure6-1.png",
"8-Figure5-1.png"
]
} | [
"What does it mean for sentences to be \"lexically overlapping\"?"
] | [
[
"1802.03052-Explanation Graphs and Sentence Roles-0"
]
] | [
"They share words."
] | 255 |
1809.08899 | Neural network approach to classifying alarming student responses to online assessment | Automated scoring engines are increasingly being used to score the free-form text responses that students give to questions. Such engines are not designed to appropriately deal with responses that a human reader would find alarming such as those that indicate an intention to self-harm or harm others, responses that allude to drug abuse or sexual abuse or any response that would elicit concern for the student writing the response. Our neural network models have been designed to help identify these anomalous responses from a large collection of typical responses that students give. The responses identified by the neural network can be assessed for urgency, severity, and validity more quickly by a team of reviewers than otherwise possible. Given the anomalous nature of these types of responses, our goal is to maximize the chance of flagging these responses for review given the constraint that only a fixed percentage of responses can viably be assessed by a team of reviewers. | {
"paragraphs": [
[
"Automated Essay Scoring (AES) and Automated Short Answer Scoring (ASAS) has become more prevalent among testing agencies BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . These systems are often designed to address one task and one task alone; to determine whether a written piece of text addresses a question or not. These engines were originally based on either hand-crafted features or term frequency–inverse document frequency (TF-IDF) approaches BIBREF4 . More recently, these techniques have been superseded by the combination of word-embeddings and neural networks BIBREF5 , BIBREF6 , BIBREF7 . For semantically simple responses, the accuracy of these approaches can often be greater than accuracy of human raters, however, these systems are not trained to appropriately deal with the anomalous cases in which a student writes something that elicits concern for the writer or those around them, which we simply call an `alert'. Typically essay scoring systems do not handle alerts, but rather, separate systems must be designed to process these types of responses before they are sent to the essay scoring system. Our goal is not to produce a classification, but rather to use the same methods developed in AES, ASAS and sentiment analysis BIBREF8 , BIBREF9 to identify some percentage of responses that fit patterns seen in known alerts and send them to be assessed by a team of reviewers.",
"Assessment organizations typically perform some sort of alert detection as part of doing business. In among hundreds of millions of long and short responses we find cases of alerts in which students have outlined cases of physical abuse, drug abuse, depression, anxiety, threats to others or plans to harm themselves BIBREF10 . Such cases are interesting from a linguistic, educational, statistical and psychological viewpoint BIBREF11 . While some of these responses require urgent attention, given the volume of responses many testing agencies deal with, it is not feasible to systematically review every single student response within a reasonable time-frame. The benefits of an automated system for alert detection is that we can prioritize a small percentage which can be reviewed quickly so that clients can receive alerts within some fixed time period, which is typically 24 hours. Given the prevalence of school shootings and similarly urgent situations, reducing the number of false positives can effectively speed up the review process and hence optimize our clients ability to intervene when necessary.",
"As a classification problem in data science, our problem has all the hallmarks of the most difficult problems in natural language processing (NLP) BIBREF12 ; alerts are anomalous in nature making training difficult, the data is messy in that it contains misspellings (both misused real words and incorrectly spelled words) BIBREF13 , students often use student specific language or multi-word colloquialisms BIBREF14 and the semantics of alerts can be quite complex and subtle, especially when the disturbing content is implicit rather than explicit. The responses themselves are drawn from a wide range of free-form text responses to questions and student comments from a semantically diverse range of topics, including many that are emotive in nature. For example, the semantic differences between an essay on gun-control and a student talking about getting a gun can be very subtle. Sometimes our systems include essays on emotive topics because the difference in language between such essays and alerts can be very small. Students often use phrases like “kill me now\" as hyperbole out of frustration rather than a genuine desire to end ones life, e.g., \"this test is so boring, kill me now\". To minimize false positives, the engine should attempt to evaluate context, not just operate on key words or phrases.",
"When it comes to neural network design, there are two dominant types of neural networks in NLP; convolutional neural networks (CNN) and recurrent neural networks (RNN) BIBREF15 . Since responses may be of an arbitrary length different recurrent neural networks are more appropriate tools for classifying alerts BIBREF16 . The most common types of cells used in the design of recurrent neural networks are Gated Recurrent Units (GRU)s BIBREF17 and Long-Short-Term-Memory (LSTM) units BIBREF18 . The latter were originally designed to overcome the vanishing gradient problem BIBREF19 . The GRU has some interesting properties which simplify the LSTM unit and the two types of units can give very similar results BIBREF20 . We also consider stacked versions, bidirectional variants BIBREF21 and the effect of an attention mechanism BIBREF22 . This study has been designed to guide the creation of our desired final production model, which may include higher stacking, dropouts (both regular and recurrent) and may be an ensemble of various networks tuned to different types of responses BIBREF23 . Similar comparisons of architectures have appeared in the literature BIBREF24 , BIBREF7 , however, we were not able to find similar comparisons for detecting anomalous events.",
"In section SECREF2 we outline the nature of the data we have collected, a precise definition of an alert and how we processed the data for the neural network. In section SECREF3 we outline the definition of the models we evaluate and how they are defined. In section SECREF4 we outline our methodology in determining which models perform best given representative sensitivities of the engine. We attempt to give an approximation of the importance of each feature of the final model."
],
[
"The American Institutes for Research tests up to 1.8 million students a day during peak testing periods. Over the 2016–2017 period AIR delivered 48 million online tests across America. Each test could involve a number of comments, notes and long answer free-form text responses that are considered to be a possible alerts as well as equations or other interactive items that are not considered to be possible alerts. In a single year we evaluate approximately 90 million free-form text responses which range anywhere from a single word or number to ten thousand word essays. These responses are recorded in html and embedded within an xml file along with additional information that allows our clients to identify which student wrote the response. The first step in processing such a response is to remove tags, html code and any non-text using regular expressions.",
"To account for spelling mistakes, rather than attempt to correct to a vocabulary of correctly spelled words, we constructed an embedding with a vocabulary that contains both correct and incorrectly spelled words. We do this by using standard algorithms BIBREF25 on a large corpus of student responses (approximately 160 million responses). The embedding we created reflects the imperfect manner in which students use words BIBREF26 . For example, while the words 'happems' and 'ocures' are both incorrectly spelled versions of 'happens' and 'occurs' respectively, our embedding exhibits a high cosine similarity between the word vectors of the correct and incorrect versions. The embedding we created was an embedding into 200 dimensional space with a vocabulary consisting of 1.12 million words. Using spelling dictionaries we approximate that the percentage of correctly spelled words in the vocabulary of this embedding is approximately 7%, or roughly 80,000 words, while the remaining 93% are either misspellings, made up words or words from other languages. Lastly, due to the prevalence of words that are concatenated (due to a missing space), we split up any word with a Levenstein distance that is greater than two from our vocabulary into smaller words that are in the vocabulary. This ensures that any sentence is tokenized into a list of elements, almost all of which have valid embeddings.",
"In our classification of alerts, with respect to how they are identified by the team of reviewers, we have two tiers of alerts, Tier A and Tier B. Tier A consists of true responses that are alarming and require urgent attention while Tier B consists of responses that are concerning in nature but require further review. For simplification, both types of responses are flagged as alerts are treated equivalently by the system. This means the classification we seek is binary. Table TABREF1 and Table TABREF2 outline certain subcategories of this classification in addition to some example responses.",
"The American Institutes for Research has a hand-scoring team specifically devoted to verifying whether a given response satisfies the requirements of being an alert. At the beginning of this program, we had very few examples of student responses that satisfied the above requirements, moreover, given the diverse nature of what constitutes an alert, the alerts we did have did not span all the types of responses we considered to be worthy of attention. As part of the initial data collection, we accumulated synthetic responses from the sites Reddit and Teen Line that were likely to be of interest. These were sent to the hand-scoring team and assessed as if they were student responses. The responses pulled consisted of posts from forums that we suspected of containing alerts as well as generic forums so that the engine produced did not simply classify forum posts from student responses. We observed that the manner in which the students engaged with the our essay platform in cases of alerts mimicked the way in which students used online forums in a sufficiently similar manner for the data to faithfully represent real alerts. This additional data also provided crucial examples of classes of alerts found too infrequently in student data for a valid classification. This initial data allowed us to build preliminary models and hence build better engines.",
"Since the programs inception, we have greatly expanded our collection of training data, which is summarized below in Table TABREF3 . While we have accumulated over 1.11 million essay responses, which include many types of essays over a range of essay topics, student age ranges, styles of writing as well as a multitude of types of alerts, we find that many of them are mapped to the same set of words after applying our preprocessing steps. When we disregard duplicate responses after preprocessing, our training sample consists of only 866,137 unique responses.",
"Our training sample has vastly over-sampled alerts compared with a typical responses in order to make it easier to train an engine. This also means that a typical test train split would not necessarily be useful in determining the efficacy of our models. The metric we use to evaluate the efficacy of our model is an approximation of the probability that a held-out alert is flagged if a fixed percentage of a typical population were to be flagged as potential alerts.",
"This method also lends itself to a method of approximating the number of alerts in a typical population. we use any engine produced to score a set of responses, which we call the threshold data, which consisted of a representative sample of 200,014 responses. Using these scores and given a percentage of responses we wish to flag for review, we produce a threshold value in which scores above this threshold level are considered alerts and those below are normal responses. This threshold data was scored using our best engine and the 200 responses that looked most like alerts were sent to be evaluated by our hand-scorers and while only 14 were found to be true alerts. Using the effectiveness of the model used, this suggests between 15 and 17 alerts may be in the entire threshold data set. We aggregated the estimates at various levels of sensitivity in combination with the efficacy of our best model to estimate that the rate of alerts is approximately 77 to 90 alerts per million responses. Further study is required to approximate what percentage are Tier A and Tier B."
],
[
"Since natural languages contain so many rules, it is inconceivable that we could simply list all possible combinations of words that would constitute an alert. This means that the only feasible models we create are statistical in nature. Just as mathematicians use elementary functions like polynomials or periodic functions to approximate smooth functions, recurrent neural networks are used to fit classes of sequences. Character-level language models are typically useful in predicting text BIBREF27 , speech recognition BIBREF28 and correcting spelling, in contrast it is generally accepted that semantic details are encoded by word-embedding based language models BIBREF29 .",
"Recurrent neural networks are behind many of the most recent advances in NLP. We have depicted the general structure of an unfolded recurrent unit in figure FIGREF4 . A single unit takes a sequence of inputs, denoted INLINEFORM0 below, which affects a set of internal states of the node, denoted INLINEFORM1 , to produce an output, INLINEFORM2 . A single unit either outputs a single variable, which is the output of the last node, or a sequence of the same length of the input sequence, INLINEFORM3 , which may be used as the input into another recurrent unit.",
"A layer of these recurrent units is a collection of independent units, each of which may pick up a different aspect of the series. A recurrent layer, consisting of INLINEFORM0 independent recurrent units, has the ability to take the most important/prevalent features and summarize those features in a vector of length INLINEFORM1 . When we feed the sequence of outputs of one recurrent layer into another recurrent layer, we call this a stacked recurrent layer. Analogous to the types of features observed in stacking convolutional and dense layers in convolutional neural networks BIBREF30 , it is suspected that stacking recurrent layers allows a neural network to model more semantically complex features of a text BIBREF31 , BIBREF32 .",
"The collections of variables associated with the state of the recurrent units, which are denoted INLINEFORM0 in figure FIGREF4 , and their relations between the inputs, INLINEFORM1 , and the outputs are what distinguishes simple recurrent units, GRUs and LSTM units. In our case, INLINEFORM2 is a sequence of word-vectors. The underlying formulas for gated recurrent units are specified by the initial condition INLINEFORM3 and zt = g (Wz xt + Uz ht-1 + bz),",
"rt = g (Wr xt + Ur ht-1 + br),",
"ht = zt ht-1 + zt yt,",
"yt = h (Wh xt + Uh(rt ht-1) + bh), where INLINEFORM0 denotes the element-wise product (also known as the Hadamard product), INLINEFORM1 is an input vector INLINEFORM2 is an output vector, INLINEFORM3 is and update gate, INLINEFORM4 , INLINEFORM5 is a reset gate, subscripted variables INLINEFORM6 , INLINEFORM7 and INLINEFORM8 are parameter matrices and a vector and INLINEFORM9 and INLINEFORM10 are the original sigmoid function and hyperbolic tangent functions respectively BIBREF17 .",
"The second type of recurrent unit we consider is the LSTM, which appeared in the literature before the GRU and contains more parameters BIBREF18 . It was created to address the vanishing gradient problem and differs from the gated recurrent unit in that it has more parameters, hence, may be regarded as more powerful. ft = g (Wf xt + Uf ht-1 + bf),",
"it = g (Wi xt + Ui ht-1 + bi),",
"ot = g (Wo xt + Uo ht-1 + bo),",
"ct = ft ct-1 + it yt,",
"ht = ot h(ct),",
"yt = h (Wz xt + Uz ht-1 + bz), where INLINEFORM0 is the input, INLINEFORM1 is the cell state vector, INLINEFORM2 is the forget gate, INLINEFORM3 is the input gate, INLINEFORM4 is the output gate and INLINEFORM5 is the output, INLINEFORM6 is a function of the input and previous output while subscripted variables INLINEFORM7 , INLINEFORM8 and INLINEFORM9 are parameter matrices and a vector. Due to their power, LSTM layers are ubiquitous when dealing with NLP tasks and are being used in many more contexts than layers of GRUs BIBREF33 .",
"Given a recurrent unit, the sequence INLINEFORM0 is fed into the recurrent unit cell by cell in the order it appears, however, it was found that some recurrent networks applied to translation benefited from reversing the ordering of the sequence, so that the recurrent units are fed the vectors from last to first as opposed to first to last. Indeed, it is possible to state the most important information at the beginning of a text or at the end. The idea behind bidirectional recurrent units is that we double the number of set units and have half the units fed the sequence in the right order, while the other half of the units are fed the sequence in reverse. Due to the lack of symmetry in the relations between states, we are potentially able to model new types of sequences in this way.",
"The last mechanism we wish to test is an attention mechanism BIBREF22 . The key to attention mechanisms is that we apply weights to the sequences, INLINEFORM0 , outputted by the recurrent layer, not just the final output. This means that the attention is a function of the intermediate states of the recurrent layer as well as the final output. This may be useful when identifying when key phrases are mentioned for example. This weighted sequence is sent to a soft-max layer to create a context vector. The attention vector is then multiplied by INLINEFORM1 to produce resulting attention vector, INLINEFORM2 . We have implemented the following attention mechanism ht = ct ht,",
"ct = t,j hj,",
"i,j = (eij)(eik), where INLINEFORM0 was the output from the LSTM layer, the INLINEFORM1 are linear transformations of the INLINEFORM2 and INLINEFORM3 is the attended output, i.e., the output of the attention layer . This mechanism has been wildly successful in machine translation BIBREF34 , BIBREF35 and other tasks BIBREF36 ."
],
[
"Unlike many tasks in NLP, our goal is not to explicitly maximize accuracy. The framework is that we may only review a certain percentage of documents, given this, we want to maximize the probability than an alert will be caught. I.e., the cost of a false-positive is negligible, while we consider false negatives to be more serious. Conversely, this same information could be used to set a percentage of documents required to be read in order to have have some degree of certainty that an alert is flagged. If we encode all alerts with the value 1 and all normal documents with a value of 0, any neural network model will serve as a statistical mechanism in which an alert that was not used in training will, a priori, be given a score by the engine from a distribution of numbers between 0 and 1 which is skewed towards 1 while normal documents will also have scores from another distribution skewed towards 0. The thresholds values where we set are values in which all scores given by the engine above the cut-off are considered possible alerts while all below are considered normal. We can adjust the number of documents read, or the percentage of alerts caught by increasing or decreasing this cut-off value.",
"To examine the efficacy of each model, our methodology consisted of constructing three sets of data:",
"The idea is that we use the generic test responses to determine how each model would score the types of responses the engine would typically see. While the number of alerts in any set can vary wildly, it is assumed that the set includes both normal and alert responses in the proportions we expect in production. Our baseline model is logistic regression applied to a TF-IDF model with latent semantic analysis used to reduce the representations of words to three hundred dimensions. This baseline model performs poorly at lower thresholds and fairly well at higher thresholds.",
"To evaluate our models, we did a 5-fold validation on a withheld set of 1000 alerts. That is to say we split our set into 5 partitions of 200 alerts, each of which was used as a validation sample for a neural network trained on all remaining data. This produced five very similar models whose performance is given by the percentage of 1000 alerts that were flagged. The percentage of 1000 alerts flagged was computed for each level of sensitivity considered, as measured by the percentage of the total population flagged for potentially being an alert.",
"Each of the models had 512 recurrent units (the attention mechanisms were not recurrent), hence, in stacking and using bidirectional variants, the number of units were halved. We predominantly trained on using Keras with Tensorflow serving the back-end. The machines we used had NVIDIA Tesla K80s. Each epoch took approximately two to three hours, however, the rate of convergence was such that we could restrict our attention to the models formed in the first 20 epochs as it was clear that the metrics we assessed had converged fairly quickly given the volume of data we had. The total amount of GPU time spent on developing these models was in excess of 4000 hours.",
"To give an approximation of the effect of each of the attributes we endowed our models with, we can average over the effectiveness of each model with and without each attribute in question. It is clear that that stacking two layers of recurrent units, each with half as many cells, offers the greatest boost in effectiveness, followed by the difference in recurrent structures followed by the use of attention. Using bidirectional units seems to give the smallest increase, but given the circumstances, any positive increase could potentially save lives."
],
[
"The problem of depression and violence in our schools is one that has recently garnered high levels of media attention. This type of problem is not confined to the scope of educational research, but this type of anomaly detection is also applicable to social media platforms where there are posts that indicate potential cases of users alluding to suicide, depression, using hate-speech and engaging in cyberbullying. The program on which this study concerns is in place and has contributed to the detection an intervention of cases of depression and violence across America. This study itself has led to a dramatic increase in our ability to detect such cases.",
"We should also mention that the above results do not represent the state-of-the-art, since we were able to take simple aggregated results from the models to produce better statistics at each threshold level than our best model. This can be done in a similar manner to the work of BIBREF23 , however, this is a topic we leave for a future paper. It is also unclear as to whether traditional sentiment analysis provides additional information from which better estimates may be possible."
],
[
"I would like to thank Jon Cohen, Amy Burkhardt, Balaji Kodeswaran, Sue Lottridge and Paul van Wamelen for their support and discussions."
]
],
"section_name": [
"Introduction",
"Defining the Data",
"Recurrent Structures Considered",
"Methodology and Results",
"Conclusions",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"a0edf39c345e1169141905b9e9b46478b5fc902b",
"e783bd19fb3b27935b6053804ffd1f61123a8f1c"
],
"answer": [
{
"evidence": [
"The American Institutes for Research tests up to 1.8 million students a day during peak testing periods. Over the 2016–2017 period AIR delivered 48 million online tests across America. Each test could involve a number of comments, notes and long answer free-form text responses that are considered to be a possible alerts as well as equations or other interactive items that are not considered to be possible alerts. In a single year we evaluate approximately 90 million free-form text responses which range anywhere from a single word or number to ten thousand word essays. These responses are recorded in html and embedded within an xml file along with additional information that allows our clients to identify which student wrote the response. The first step in processing such a response is to remove tags, html code and any non-text using regular expressions.",
"The American Institutes for Research has a hand-scoring team specifically devoted to verifying whether a given response satisfies the requirements of being an alert. At the beginning of this program, we had very few examples of student responses that satisfied the above requirements, moreover, given the diverse nature of what constitutes an alert, the alerts we did have did not span all the types of responses we considered to be worthy of attention. As part of the initial data collection, we accumulated synthetic responses from the sites Reddit and Teen Line that were likely to be of interest. These were sent to the hand-scoring team and assessed as if they were student responses. The responses pulled consisted of posts from forums that we suspected of containing alerts as well as generic forums so that the engine produced did not simply classify forum posts from student responses. We observed that the manner in which the students engaged with the our essay platform in cases of alerts mimicked the way in which students used online forums in a sufficiently similar manner for the data to faithfully represent real alerts. This additional data also provided crucial examples of classes of alerts found too infrequently in student data for a valid classification. This initial data allowed us to build preliminary models and hence build better engines.",
"Since the programs inception, we have greatly expanded our collection of training data, which is summarized below in Table TABREF3 . While we have accumulated over 1.11 million essay responses, which include many types of essays over a range of essay topics, student age ranges, styles of writing as well as a multitude of types of alerts, we find that many of them are mapped to the same set of words after applying our preprocessing steps. When we disregard duplicate responses after preprocessing, our training sample consists of only 866,137 unique responses."
],
"extractive_spans": [],
"free_form_answer": "Essays collected from students from American Institutes for Research tests, Synthetic responses from Reddit and Teen Line",
"highlighted_evidence": [
"The American Institutes for Research tests up to 1.8 million students a day during peak testing periods. ",
"In a single year we evaluate approximately 90 million free-form text responses which range anywhere from a single word or number to ten thousand word essays. ",
"As part of the initial data collection, we accumulated synthetic responses from the sites Reddit and Teen Line that were likely to be of interest.",
"While we have accumulated over 1.11 million essay responses, which include many types of essays over a range of essay topics, student age ranges, styles of writing as well as a multitude of types of alerts, we find that many of them are mapped to the same set of words after applying our preprocessing steps."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The American Institutes for Research tests up to 1.8 million students a day during peak testing periods. Over the 2016–2017 period AIR delivered 48 million online tests across America. Each test could involve a number of comments, notes and long answer free-form text responses that are considered to be a possible alerts as well as equations or other interactive items that are not considered to be possible alerts. In a single year we evaluate approximately 90 million free-form text responses which range anywhere from a single word or number to ten thousand word essays. These responses are recorded in html and embedded within an xml file along with additional information that allows our clients to identify which student wrote the response. The first step in processing such a response is to remove tags, html code and any non-text using regular expressions.",
"To account for spelling mistakes, rather than attempt to correct to a vocabulary of correctly spelled words, we constructed an embedding with a vocabulary that contains both correct and incorrectly spelled words. We do this by using standard algorithms BIBREF25 on a large corpus of student responses (approximately 160 million responses). The embedding we created reflects the imperfect manner in which students use words BIBREF26 . For example, while the words 'happems' and 'ocures' are both incorrectly spelled versions of 'happens' and 'occurs' respectively, our embedding exhibits a high cosine similarity between the word vectors of the correct and incorrect versions. The embedding we created was an embedding into 200 dimensional space with a vocabulary consisting of 1.12 million words. Using spelling dictionaries we approximate that the percentage of correctly spelled words in the vocabulary of this embedding is approximately 7%, or roughly 80,000 words, while the remaining 93% are either misspellings, made up words or words from other languages. Lastly, due to the prevalence of words that are concatenated (due to a missing space), we split up any word with a Levenstein distance that is greater than two from our vocabulary into smaller words that are in the vocabulary. This ensures that any sentence is tokenized into a list of elements, almost all of which have valid embeddings."
],
"extractive_spans": [],
"free_form_answer": "Student responses to the American Institutes for Research tests.",
"highlighted_evidence": [
"The American Institutes for Research tests up to 1.8 million students a day during peak testing periods.",
"We do this by using standard algorithms BIBREF25 on a large corpus of student responses (approximately 160 million responses)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"47f7c7d76d2e9e8921874589ecf5640773e9c01b",
"eebc17d9a8c6cbce056a5d734d7e45001477e79b"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 5. The effect of each of the attributes we endowed our networks."
],
"extractive_spans": [],
"free_form_answer": "GRU and LSTM models with a combination of the following characteristics: bidirectional vs normal, attention vs no attention, stacked vs flat.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 5. The effect of each of the attributes we endowed our networks."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 4. Approximations of the percentage of alerts caught by each model for each percentage allowed to be reviewed."
],
"extractive_spans": [],
"free_form_answer": "GRU, Stacked GRU, Bidirectional GRU, Bidirectional stacked GRU, GRU with attention, Stacked GRU with Attention, Bidirectional GRU with attention, Bidirectional Stacked GRU with Attention, LSTM, Stacked LSTM, Bidirectional LSTM, Bidirectional stacked LSTM, LSTM with attention, Stacked LSTM with Attention, Bidirectional LSTM with attention, Bidirectional Stacked LSTM with Attention",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4. Approximations of the percentage of alerts caught by each model for each percentage allowed to be reviewed."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"5eaa8c6e8bc61e1ea87894d999dc783507cbfbb6",
"8b50ceaba37c2db87cca786d4cf33e6d82cf4a15"
],
"answer": [
{
"evidence": [
"To account for spelling mistakes, rather than attempt to correct to a vocabulary of correctly spelled words, we constructed an embedding with a vocabulary that contains both correct and incorrectly spelled words. We do this by using standard algorithms BIBREF25 on a large corpus of student responses (approximately 160 million responses). The embedding we created reflects the imperfect manner in which students use words BIBREF26 . For example, while the words 'happems' and 'ocures' are both incorrectly spelled versions of 'happens' and 'occurs' respectively, our embedding exhibits a high cosine similarity between the word vectors of the correct and incorrect versions. The embedding we created was an embedding into 200 dimensional space with a vocabulary consisting of 1.12 million words. Using spelling dictionaries we approximate that the percentage of correctly spelled words in the vocabulary of this embedding is approximately 7%, or roughly 80,000 words, while the remaining 93% are either misspellings, made up words or words from other languages. Lastly, due to the prevalence of words that are concatenated (due to a missing space), we split up any word with a Levenstein distance that is greater than two from our vocabulary into smaller words that are in the vocabulary. This ensures that any sentence is tokenized into a list of elements, almost all of which have valid embeddings."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Using spelling dictionaries we approximate that the percentage of correctly spelled words in the vocabulary of this embedding is approximately 7%, or roughly 80,000 words, while the remaining 93% are either misspellings, made up words or words from other languages. "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"4ea30d0d5417185cdac848a4ca030a9b4cc7cfe4",
"b23746ca929384489e26bc2090c9ba2b05890d8b"
],
"answer": [
{
"evidence": [
"The idea is that we use the generic test responses to determine how each model would score the types of responses the engine would typically see. While the number of alerts in any set can vary wildly, it is assumed that the set includes both normal and alert responses in the proportions we expect in production. Our baseline model is logistic regression applied to a TF-IDF model with latent semantic analysis used to reduce the representations of words to three hundred dimensions. This baseline model performs poorly at lower thresholds and fairly well at higher thresholds."
],
"extractive_spans": [],
"free_form_answer": "Logistic regression with TF-IDF with latent semantic analysis representations",
"highlighted_evidence": [
"Our baseline model is logistic regression applied to a TF-IDF model with latent semantic analysis used to reduce the representations of words to three hundred dimensions. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The idea is that we use the generic test responses to determine how each model would score the types of responses the engine would typically see. While the number of alerts in any set can vary wildly, it is assumed that the set includes both normal and alert responses in the proportions we expect in production. Our baseline model is logistic regression applied to a TF-IDF model with latent semantic analysis used to reduce the representations of words to three hundred dimensions. This baseline model performs poorly at lower thresholds and fairly well at higher thresholds."
],
"extractive_spans": [
"logistic regression applied to a TF-IDF model"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our baseline model is logistic regression applied to a TF-IDF model with latent semantic analysis used to reduce the representations of words to three hundred dimensions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0663ce8243c95ea07ae2815eb90562822680c297",
"2e5925e3013824faabb2aff56f4dd8ec3fccf669"
],
"answer": [
{
"evidence": [
"The second type of recurrent unit we consider is the LSTM, which appeared in the literature before the GRU and contains more parameters BIBREF18 . It was created to address the vanishing gradient problem and differs from the gated recurrent unit in that it has more parameters, hence, may be regarded as more powerful. ft = g (Wf xt + Uf ht-1 + bf),"
],
"extractive_spans": [],
"free_form_answer": "Recurrent neural network",
"highlighted_evidence": [
"The second type of recurrent unit we consider is the LSTM, which appeared in the literature before the GRU and contains more parameters BIBREF18 . "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 4. Approximations of the percentage of alerts caught by each model for each percentage allowed to be reviewed."
],
"extractive_spans": [],
"free_form_answer": "GRU, LSTM",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4. Approximations of the percentage of alerts caught by each model for each percentage allowed to be reviewed."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"8debb5b21328932a1c748016e1b39ee49097e3d0",
"aafc05ec64ee1f789330ec936e5e2d8f79113599"
],
"answer": [
{
"evidence": [
"Our training sample has vastly over-sampled alerts compared with a typical responses in order to make it easier to train an engine. This also means that a typical test train split would not necessarily be useful in determining the efficacy of our models. The metric we use to evaluate the efficacy of our model is an approximation of the probability that a held-out alert is flagged if a fixed percentage of a typical population were to be flagged as potential alerts."
],
"extractive_spans": [
"approximation of the probability that a held-out alert is flagged if a fixed percentage of a typical population were to be flagged as potential alerts"
],
"free_form_answer": "",
"highlighted_evidence": [
" The metric we use to evaluate the efficacy of our model is an approximation of the probability that a held-out alert is flagged if a fixed percentage of a typical population were to be flagged as potential alerts."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"6516662ae619b85c74fd1fd2fe1f9f981f0a2aca",
"5a63932095e93f78be00a683a12cfa30e8a99ffb"
],
"answer": [
{
"evidence": [
"In our classification of alerts, with respect to how they are identified by the team of reviewers, we have two tiers of alerts, Tier A and Tier B. Tier A consists of true responses that are alarming and require urgent attention while Tier B consists of responses that are concerning in nature but require further review. For simplification, both types of responses are flagged as alerts are treated equivalently by the system. This means the classification we seek is binary. Table TABREF1 and Table TABREF2 outline certain subcategories of this classification in addition to some example responses."
],
"extractive_spans": [],
"free_form_answer": "Severity is manually identified by a team of reviewers.",
"highlighted_evidence": [
"In our classification of alerts, with respect to how they are identified by the team of reviewers, we have two tiers of alerts, Tier A and Tier B. Tier A consists of true responses that are alarming and require urgent attention while Tier B consists of responses that are concerning in nature but require further review."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"14e9daaba55d4227a021833c5c79c58a39c8f86e",
"ecc3fa238319f7b3cc75ac9d317c5739de742f99"
],
"answer": [
{
"evidence": [
"In our classification of alerts, with respect to how they are identified by the team of reviewers, we have two tiers of alerts, Tier A and Tier B. Tier A consists of true responses that are alarming and require urgent attention while Tier B consists of responses that are concerning in nature but require further review. For simplification, both types of responses are flagged as alerts are treated equivalently by the system. This means the classification we seek is binary. Table TABREF1 and Table TABREF2 outline certain subcategories of this classification in addition to some example responses."
],
"extractive_spans": [],
"free_form_answer": "Urgency is manually identified by a team of reviewers.",
"highlighted_evidence": [
"In our classification of alerts, with respect to how they are identified by the team of reviewers, we have two tiers of alerts, Tier A and Tier B. Tier A consists of true responses that are alarming and require urgent attention while Tier B consists of responses that are concerning in nature but require further review."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
"",
"",
"",
""
],
"question": [
"what dataset is used?",
"what neural network models are used?",
"Do they report results only on English data?",
"What baseline model is used?",
"What type of neural network models are used?",
"How is validity identified and what metric is used to quantify it?",
"How is severity identified and what metric is used to quantify it?",
"How is urgency identified and what metric is used to quantify it?"
],
"question_id": [
"83db51da819adf6faeb950fe04b4df942a887fb5",
"7e7471bc24970c6f23baff570be385fd3534926c",
"ec5e84a1d1b12f7185183d165cbb5eae66d9833e",
"7f958017cbb08962c80e625c2fd7a1e2375f27a3",
"4130651509403becc468bdbe973e63d3716beade",
"6edef748370e63357a57610b5784204c9715c0b4",
"6b302280522c350c4d1527d8c6ebc5b470f9314c",
"7da138ec43a88ea75374c40e8491f7975db29480"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1. Student response that meet the requirement to trigger an immediate alert notification to client.",
"Table 2. Student response contains watch words or phrases (these may be required to undergo review further).",
"Table 3. The table gives the precise number of examples used in training both before preprocessing (with possible duplicates) and after (unique responses) as well as an unclassified set we used for determining an approximation of the percentage of responses flagged by the engine at various levels of sensitivity.",
"Figure 1. When we unfold an RNN, we express it as a sequence of cell each accepting, as input, an element of the sequence. The output of the RNN is the output of the last state.",
"Figure 2. The left is an LSTM; the gates ft, it and ot are vectors of values between 0 and 1 that augment the input being passed through them by selecting which features to keep and which to discard. The input xt, ht−1, ct−1 and yt are more generically valued vectors. The GRU is based on a similar concept with a simpler design where an update gate, zt decides which features of the previous output to keep as output of the new cell and which features need to contain input specific information, which is stored in yt.",
"Table 4. Approximations of the percentage of alerts caught by each model for each percentage allowed to be reviewed.",
"Table 5. The effect of each of the attributes we endowed our networks."
],
"file": [
"3-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"5-Figure1-1.png",
"5-Figure2-1.png",
"7-Table4-1.png",
"7-Table5-1.png"
]
} | [
"what dataset is used?",
"what neural network models are used?",
"What baseline model is used?",
"What type of neural network models are used?",
"How is severity identified and what metric is used to quantify it?",
"How is urgency identified and what metric is used to quantify it?"
] | [
[
"1809.08899-Defining the Data-4",
"1809.08899-Defining the Data-3",
"1809.08899-Defining the Data-1",
"1809.08899-Defining the Data-0"
],
[
"1809.08899-7-Table4-1.png",
"1809.08899-7-Table5-1.png"
],
[
"1809.08899-Methodology and Results-2"
],
[
"1809.08899-7-Table4-1.png",
"1809.08899-Recurrent Structures Considered-7"
],
[
"1809.08899-Defining the Data-2"
],
[
"1809.08899-Defining the Data-2"
]
] | [
"Student responses to the American Institutes for Research tests.",
"GRU, Stacked GRU, Bidirectional GRU, Bidirectional stacked GRU, GRU with attention, Stacked GRU with Attention, Bidirectional GRU with attention, Bidirectional Stacked GRU with Attention, LSTM, Stacked LSTM, Bidirectional LSTM, Bidirectional stacked LSTM, LSTM with attention, Stacked LSTM with Attention, Bidirectional LSTM with attention, Bidirectional Stacked LSTM with Attention",
"Logistic regression with TF-IDF with latent semantic analysis representations",
"GRU, LSTM",
"Severity is manually identified by a team of reviewers.",
"Urgency is manually identified by a team of reviewers."
] | 256 |
1711.11118 | Multimodal Attribute Extraction | The broad goal of information extraction is to derive structured information from unstructured data. However, most existing methods focus solely on text, ignoring other types of unstructured data such as images, video and audio which comprise an increasing portion of the information on the web. To address this shortcoming, we propose the task of multimodal attribute extraction. Given a collection of unstructured and semi-structured contextual information about an entity (such as a textual description, or visual depictions) the task is to extract the entity's underlying attributes. In this paper, we provide a dataset containing mixed-media data for over 2 million product items along with 7 million attribute-value pairs describing the items which can be used to train attribute extractors in a weakly supervised manner. We provide a variety of baselines which demonstrate the relative effectiveness of the individual modes of information towards solving the task, as well as study human performance. | {
"paragraphs": [
[
"Given the large collections of unstructured and semi-structured data available on the web, there is a crucial need to enable quick and efficient access to the knowledge content within them. Traditionally, the field of information extraction has focused on extracting such knowledge from unstructured text documents, such as job postings, scientific papers, news articles, and emails. However, the content on the web increasingly contains more varied types of data, including semi-structured web pages, tables that do not adhere to any schema, photographs, videos, and audio. Given a query by a user, the appropriate information may appear in any of these different modes, and thus there's a crucial need for methods to construct knowledge bases from different types of data, and more importantly, combine the evidence in order to extract the correct answer.",
"Motivated by this goal, we introduce the task of multimodal attribute extraction. Provided contextual information about an entity, in the form of any of the modes described above, along with an attribute query, the goal is to extract the corresponding value for that attribute. While attribute extraction on the domain of text has been well-studied BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , to our knowledge this is the first time attribute extraction using a combination of multiple modes of data has been considered. This introduces additional challenges to the problem, since a multimodal attribute extractor needs to be able to return values provided any kind of evidence, whereas modern attribute extractors treat attribute extraction as a tagging problem and thus only work when attributes occur as a substring of text.",
"In order to support research on this task, we release the Multimodal Attribute Extraction (MAE) dataset, a large dataset containing mixed-media data for over 2.2 million commercial product items, collected from a large number of e-commerce sites using the Diffbot Product API. The collection of items is diverse and includes categories such as electronic products, jewelry, clothing, vehicles, and real estate. For each item, we provide a textual product description, collection of images, and open-schema table of attribute-value pairs (see Figure 1 for an example). The provided attribute-value pairs only provide a very weak source of supervision; where the value might appear in the context is not known, and further, it is not even guaranteed that the value can be extracted from the provided evidence. In all, there are over 4 million images and 7.6 million attribute-value pairs. By releasing such a large dataset, we hope to drive progress on this task similar to how the Penn Treebank BIBREF5 , SQuAD BIBREF6 , and Imagenet BIBREF7 have driven progress on syntactic parsing, question answering, and object recognition, respectively.",
"To asses the difficulty of the task and the dataset, we first conduct a human evaluation study using Mechanical Turk that demonstrates that all available modes of information are useful for detecting values. We also train and provide results for a variety of machine learning models on the dataset. We observe that a simple most-common value classifier, which always predicts the most-common value for a given attribute, provides a very difficult baseline for more complicated models to beat (33% accuracy). In our current experiments, we are unable to train an image-only classifier that can outperform this simple model, despite using modern neural architectures such as VGG-16 BIBREF8 and Google's Inception-v3 BIBREF9 . However, we are able to obtain significantly better performance using a text-only classifier (59% accuracy). We hope to improve and obtain more accurate models in further research."
],
[
"Since a multimodal attribute extractor needs to be able to return values for attributes which occur in images as well as text, we cannot treat the problem as a labeling problem as is done in the existing approaches to attribute extraction. We instead define the problem as following: Given a product $i$ and a query attribute $a$ , we need to extract a corresponding value $v$ from the evidence provided for $i$ , namely, a textual description of it ( $D_i$ ) and a collection of images ( $I_i$ ). For example, in Figure 1 , we observe the image and the description of a product, and examples of some attributes and values of interest. For training, for a set of product items $\\mathcal {I}$ , we are given, for each item $i \\in \\mathcal {I}$ , its textual description $D_i$ and the images $I_i$ , and a set $a$0 comprised of attribute-value pairs (i.e. $a$1 ). In general, the products at query time will not be in $a$2 , and we do not assume any fixed ontology for products, attributes, or values. We evaluate the performance on this task as the accuracy of the predicted value with the observed value, however since there may be multiple correct values, we also include hits@ $a$3 evaluation."
],
[
"In this section, we formulate a novel extraction model for the task that builds upon the architectures used recently in tasks such as image captioning, question answering, VQA, etc. The model is composed of three separate modules: (1) an encoding module that uses modern neural architectures to jointly embed the query, text, and images into a common latent space, (2) a fusion module that combines these embedded vectors using an attribute-specific attention mechanism to a single dense vector, and (3) a similarity-based value decoder which produces the final value prediction. We provide an overview of this architecture in Figure 3 ."
],
[
"We evaluate on a subset of the MAE dataset consisting of the 100 most common attributes, covering roughly 50% of the examples in the overall MAE dataset. To determine the relative effectiveness of the different modes of information, we train image and text only versions of the model described above. Following the suggestions in BIBREF15 we use a 600 unit single layer in our text convolutions, and a 5 word window size. We apply dropout to the output of both the image and text CNNs before feeding the output through fully connected layers to obtain the image and text embeddings. Employing a coarse grid search, we found models performed best using a large embedding dimension of $k=1024$ . Lastly, we explore multimodal models using both the Concat and the GMU strategies. To evaluate models we use the hits@ $k$ metric on the values.",
"The results of our experiments are summarized in Table 1 . We include a simple most-common value model that always predicts the most-common value for a given attribute. Observe that the performance of the image baseline model is almost identical to the most-common value model. Similarly, the performance of the multimodal models is similar to the text baseline model. Thus our models so far have been unable to effectively incorporate information from the image data. These results show that the task is sufficiently challenging that even a complex neural model cannot solve the task, and thus is a ripe area for future research.",
"Model predictions for the example shown in Figure 1 are given in Table 2 , along with their similarity scores. Observe that the predictions made by the current image baseline model are almost identical to the most-common value model. This suggests that our current image baseline model is essentially ignoring all of the image related information and instead learning to predict common values."
],
[
"Our work is related to, and builds upon, a number of existing approaches.",
"The introduction of large curated datasets has driven progress in many fields of machine learning. Notable examples include: The Penn Treebank BIBREF5 for syntactic parsing models, Imagenet BIBREF7 for object recognition, Flickr30k BIBREF16 and MS COCO BIBREF17 for image captioning, SQuAD BIBREF6 for question answering and VQA BIBREF18 for visual question answering. Despite the interest in related tasks, there is currently no publicly available dataset for attribute extraction, let alone multimodal attribute extraction. This creates a high barrier to entry as anyone interested in attribute extraction must go through the expensive and time-consuming process of acquiring a dataset. Furthermore, there is no way to compare the effectiveness of different techniques. Our dataset aims to address this concern.",
"Recently, there has been renewed interest in multimodal machine learning problems. BIBREF19 demonstrate an effective image captioning system that uses a CNN to encode an image which is used as the input to an LSTM BIBREF20 decoder, producing the output caption. This encoder-decoder architecture forms the basis for successful approaches to other multimodal problems such as visual question answering BIBREF21 . Another body of work focuses on the problem of unifying information from different modes of information. BIBREF22 propose to concatenate together the output of a text-based distributional model (such as word2vec BIBREF23 ) with an encoding produced from a CNN applied to images of the word. BIBREF24 demonstrate an alternative approach to concatenation, where instead the a word embedding is learned that minimizes a joint loss function involving context-prediction and image reconstruction losses. Another alternative to concatenation is the gated multimodal unit (GMU) proposed in BIBREF13 . We investigate the performance of different techniques for combining image and text data for product attribute extraction in section \"Experiments\" .",
"To our knowledge, we are the first to study the problem of attribute extraction from multimodal data. However the problem of attribute extraction from text is well studied. BIBREF1 treat attribute extraction of retail products as a form of named entity recognition. They predefine a list of attributes to extract and train a Naïve Bayes model on a manually labeled seed dataset to extract the corresponding values. BIBREF3 build on this work by bootstrapping to expand the seed list, and evaluate more complicated models such as HMMs, MaxEnt, SVMs, and CRFs. To mitigate the introduction noisy labels when using semi-supervised techniques, BIBREF2 incorporates crowdsourcing to manually accept or reject the newly introduced labels. One major drawback of these approaches is that they require manually labelled seed data to construct the knowledge base of attribute-value pairs, which can be quite expensive for a large number of attributes. BIBREF0 address this problem by using an unsupervised, LDA-based approach to generate word classes from reviews, followed by aligning them to the product description. BIBREF4 propose to extract attribute-value pairs from structured data on product pages, such as HTML tables, and lists, to construct the KB. This is essentially the approach used to construct the knowledge base of attribute-value pairs used in our work, which is automatically performed by Diffbot's Product API."
],
[
"In order to kick start research on multimodal information extraction problems, we introduce the multimodal attribute extraction dataset, an attribute extraction dataset derived from a large number of e-commerce websites. MAE features images, textual descriptions, and attribute-value pairs for a diverse set of products. Preliminary data from an Amazon Mechanical Turk study demonstrates that both modes of information are beneficial to attribute extraction. We measure the performance of a collection of baseline models, and observe that reasonably high accuracy can be obtained using only text. However, we are unable to train off-the-shelf methods to effectively leverage image data.",
"There are a number of exciting avenues for future research. We are interested in performing a more comprehensive crowdsourcing study to identify the ways in which different evidence forms are useful, and in order to create clean evaluation data. As this dataset brings up interesting challenges in multimodal machine learning, we will explore a variety of novel architectures that are able to combine the different forms of evidence effectively to accurately extract the attribute values. Finally, we are also interested in exploring other aspects of knowledge base construction that may benefit from multimodal reasoning, such as relational prediction, entity linking, and disambiguation."
]
],
"section_name": [
"Introduction",
"Multimodal Product Attribute Extraction",
"Multimodal Fusion Model",
"Experiments",
"Related Work",
"Conclusions and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"1cd605dde3d0460710724e4f4332e89d2299daa2",
"7750e17d17f71e7ce53c49aaf94b7f4a0257b96a"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5"
]
},
{
"annotation_id": [
"1bfdffaea2754c003af0c2bb5eb2496b9d6adfaa",
"88cb3c7a5c8808e3dbe560307b3e22b1db864dab"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5"
]
},
{
"annotation_id": [
"2300e73d708582c6f07b5b3b01bf936e88c0a271",
"58e62673e2bf2abfb4504e7f53d76c15750e802d"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5",
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"676aab88c22de606638f417587ef6f67899470df",
"a3f94021f0bc463058d9f1ca64447bd9e1118d2f"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5"
]
},
{
"annotation_id": [
"300d2c77894128bcb5861f7cb3b73db17767b641",
"60ecbb02dc02996ffa0e3661d0b1c892950cd36c"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5"
]
},
{
"annotation_id": [
"0665d6ccf9f73a039e932f3b4523a09b121b90f8",
"d2ae4bb2fbe284d7e1ca197377bcecc41c904442"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: MAE dataset statistics."
],
"extractive_spans": [],
"free_form_answer": "7.6 million",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: MAE dataset statistics."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5"
]
},
{
"annotation_id": [
"4a4ac496acc8b66e2685a3c2186b937648929cb8",
"9e4061570e09c3676d6f35480abcd7a300f9a9a7"
],
"answer": [
{
"evidence": [
"In order to support research on this task, we release the Multimodal Attribute Extraction (MAE) dataset, a large dataset containing mixed-media data for over 2.2 million commercial product items, collected from a large number of e-commerce sites using the Diffbot Product API. The collection of items is diverse and includes categories such as electronic products, jewelry, clothing, vehicles, and real estate. For each item, we provide a textual product description, collection of images, and open-schema table of attribute-value pairs (see Figure 1 for an example). The provided attribute-value pairs only provide a very weak source of supervision; where the value might appear in the context is not known, and further, it is not even guaranteed that the value can be extracted from the provided evidence. In all, there are over 4 million images and 7.6 million attribute-value pairs. By releasing such a large dataset, we hope to drive progress on this task similar to how the Penn Treebank BIBREF5 , SQuAD BIBREF6 , and Imagenet BIBREF7 have driven progress on syntactic parsing, question answering, and object recognition, respectively."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In order to support research on this task, we release the Multimodal Attribute Extraction (MAE) dataset, a large dataset containing mixed-media data for over 2.2 million commercial product items, collected from a large number of e-commerce sites using the Diffbot Product API."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"In order to support research on this task, we release the Multimodal Attribute Extraction (MAE) dataset, a large dataset containing mixed-media data for over 2.2 million commercial product items, collected from a large number of e-commerce sites using the Diffbot Product API. The collection of items is diverse and includes categories such as electronic products, jewelry, clothing, vehicles, and real estate. For each item, we provide a textual product description, collection of images, and open-schema table of attribute-value pairs (see Figure 1 for an example). The provided attribute-value pairs only provide a very weak source of supervision; where the value might appear in the context is not known, and further, it is not even guaranteed that the value can be extracted from the provided evidence. In all, there are over 4 million images and 7.6 million attribute-value pairs. By releasing such a large dataset, we hope to drive progress on this task similar to how the Penn Treebank BIBREF5 , SQuAD BIBREF6 , and Imagenet BIBREF7 have driven progress on syntactic parsing, question answering, and object recognition, respectively."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In order to support research on this task, we release the Multimodal Attribute Extraction (MAE) dataset, a large dataset containing mixed-media data for over 2.2 million commercial product items, collected from a large number of e-commerce sites using the Diffbot Product API. "
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5"
]
},
{
"annotation_id": [
"7b793483f095fbf406a51b052daa0794544abf7e",
"cf4333bdc1309137d2aaac29834399209414e42d"
],
"answer": [
{
"evidence": [
"In order to support research on this task, we release the Multimodal Attribute Extraction (MAE) dataset, a large dataset containing mixed-media data for over 2.2 million commercial product items, collected from a large number of e-commerce sites using the Diffbot Product API. The collection of items is diverse and includes categories such as electronic products, jewelry, clothing, vehicles, and real estate. For each item, we provide a textual product description, collection of images, and open-schema table of attribute-value pairs (see Figure 1 for an example). The provided attribute-value pairs only provide a very weak source of supervision; where the value might appear in the context is not known, and further, it is not even guaranteed that the value can be extracted from the provided evidence. In all, there are over 4 million images and 7.6 million attribute-value pairs. By releasing such a large dataset, we hope to drive progress on this task similar to how the Penn Treebank BIBREF5 , SQuAD BIBREF6 , and Imagenet BIBREF7 have driven progress on syntactic parsing, question answering, and object recognition, respectively."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"For each item, we provide a textual product description, collection of images, and open-schema table of attribute-value pairs (see Figure 1 for an example)."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"In order to support research on this task, we release the Multimodal Attribute Extraction (MAE) dataset, a large dataset containing mixed-media data for over 2.2 million commercial product items, collected from a large number of e-commerce sites using the Diffbot Product API. The collection of items is diverse and includes categories such as electronic products, jewelry, clothing, vehicles, and real estate. For each item, we provide a textual product description, collection of images, and open-schema table of attribute-value pairs (see Figure 1 for an example). The provided attribute-value pairs only provide a very weak source of supervision; where the value might appear in the context is not known, and further, it is not even guaranteed that the value can be extracted from the provided evidence. In all, there are over 4 million images and 7.6 million attribute-value pairs. By releasing such a large dataset, we hope to drive progress on this task similar to how the Penn Treebank BIBREF5 , SQuAD BIBREF6 , and Imagenet BIBREF7 have driven progress on syntactic parsing, question answering, and object recognition, respectively."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The provided attribute-value pairs only provide a very weak source of supervision; where the value might appear in the context is not known, and further, it is not even guaranteed that the value can be extracted from the provided evidence."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5",
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How many of the attribute-value pairs are found in video?",
"How many of the attribute-value pairs are found in audio?",
"How many of the attribute-value pairs are found in images?",
"How many of the attribute-value pairs are found in semi-structured text?",
"How many of the attribute-value pairs are found in unstructured text?",
"How many different semi-structured templates are represented in the data?",
"Are all datapoints from the same website?",
"Do they consider semi-structured webpages?"
],
"question_id": [
"d5d4504f419862275a532b8e53d0ece16e0ae8d1",
"f1e70b63c45ab0fc35dc63de089c802543e30c8f",
"39d20b396f12f0432770c15b80dc0d740202f98d",
"4e0df856b39055a9ba801cc9c8e56d5b069bda11",
"bbc6d0402cae16084261f8558cebb4aa6d5b1ea5",
"a7e03d24549961b38e15b5386d9df267900ef4c8",
"036c400424357457e42b22df477b7c3cdc2eefe9",
"63eda2af88c35a507fbbfda0ec1082f58091883a"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"semi-structured",
"semi-structured",
"semi-structured",
"semi-structured",
"semi-structured",
"semi-structured",
"semi-structured",
"semi-structured"
],
"topic_background": [
"research",
"research",
"research",
"research",
"research",
"research",
"research",
"research"
]
} | {
"caption": [
"Figure 1: An example item with its descriptions: image, tabular attributes, and textual.",
"Table 1: MAE dataset statistics.",
"Figure 3: Basic architecture of the multimodal attribute extraction model.",
"Table 2: Baseline model results.",
"Table 3: Top 5 predictions on the data in Figure 1 when querying for color finish."
],
"file": [
"2-Figure1-1.png",
"2-Table1-1.png",
"3-Figure3-1.png",
"4-Table2-1.png",
"5-Table3-1.png"
]
} | [
"How many different semi-structured templates are represented in the data?"
] | [
[
"1711.11118-2-Table1-1.png"
]
] | [
"7.6 million"
] | 257 |
1903.00384 | Data-driven Approach for Quality Evaluation on Knowledge Sharing Platform | In recent years, voice knowledge sharing and question answering (Q&A) platforms have attracted much attention, as they greatly facilitate knowledge acquisition for people. However, little research has addressed quality evaluation for voice knowledge sharing. This paper presents a data-driven approach to automatically evaluate the quality of a specific Q&A platform (Zhihu Live). Extensive experiments demonstrate the effectiveness of the proposed method. Furthermore, we introduce a dataset of Zhihu Live as an open resource for researchers in related areas. This dataset will facilitate the development of new methods for quality evaluation of knowledge sharing services. | {
"paragraphs": [
[
"Knowledge sharing platforms such as Quora and Zhihu emerge as very convenient tools for acquiring knowledge. These question and answer (Q&A) platforms are newly emerged communities about knowledge acquisition, experience sharing and social networks services (SNS).",
"Unlike many other Q&A platforms, Zhihu platform resembles a social network community. Users can follow other people, post ideas, up-vote or down-vote answers, and write their own answers. Zhihu allows users to keep track of specific fields by following related topics, such as “Education”, “Movie”, “Technology” and “Music”. Once a Zhihu user starts to follow a specific topic or a person, the related updates are automatically pushed to the user's feed timeline.",
"Although these platforms have exploded in popularity, they face some potential problems. The key problem is that as the number of users grows, a large volume of low-quality questions and answers emerge and overwhelm users, which make users hard to find relevant and helpful information.",
"Zhihu Live is a real-time voice-answering product on the Zhihu platform, which enables the speakers to share knowledge, experience, and opinions on a subject. The audience can ask questions and get answers from the speakers as well. It allows communication with the speakers easily and efficiently through the Internet. Zhihu Live provides an extremely useful reward mechanism (like up-votes, following growth and economic returns), to encourage high-quality content providers to generate high-level information on Zhihu platform.",
"However, due to the lack of efficient filter mechanism and evaluation schemes, many users suffer from lots of low-quality contents, which affects the service negatively. Recently, studies on social Q&A platforms and knowledge sharing are rising and have achieved many promising results. Shah et al. BIBREF0 propose a data-driven approach with logistic regression and carefully designed hand-crafted features to predict the answer quality on Yahoo! Answers. Wang et al. BIBREF1 illustrate that heterogeneity in the user and question graphs are important contributors to the quality of Quora's knowledge base. Paul et al. BIBREF2 explore reputation mechanism in quora through detailed data analysis, their experiments indicate that social voting helps users identify and promote good content but is prone to preferential attachment. Patil et al. BIBREF3 propose a method to detect experts on Quora by their activity, quality of answers, linguistic characteristics and temporal behaviors, and achieves 97% accuracy and 0.987 AUC. Rughinis et al. BIBREF4 indicate that there are different regimes of engagement at the intersection of the technological infrastructure and users' participation in Quora.",
"All of these works are mainly focused on answer ranking and answer quality evaluation. But there is little research achievement about quality evaluation in voice-answering areas. In this work, we present a data-driven approach for quality evaluation about Zhihu Live, by consuming the dataset we collected to gather knowledge and insightful conclusion. The proposed data-driven approach includes data collection, storage, preprocessing, data analysis, and predictive analysis via machine learning. The architecture of our data-driven method is shown in Fig. FIGREF3 . The records are crawled from Zhihu Live official website and stored in MongoDB. Data preprocessing methods include cleaning and data normalization to make the dataset satisfy our target problem. Descriptive data analysis and predictive analysis are also conducted for deeper analysis about this dataset.",
"The main contributions of this paper are as follows: (1) We release a public benchmark dataset which contains 7242 records and 286,938 text comments about Zhihu Live. Detailed analysis about the dataset is also discussed in this paper. This dataset could help researchers verify their ideas in related fields. (2) By analyzing this dataset, we gain several insightful conclusion about Zhihu Live. (3) We also propose a multi-branched neural network (MTNet) to evaluate Zhihu Lives' scores. The superiority of our proposed model is demonstrated by comparing performance with other mainstream regressors.",
"The rest of this paper is organized as follows: Section 2 describes detailed procedures of ZhihuLive-DB collection, and descriptive analysis. Section 3 illustrates our proposed MTNet. In section 4, we give a detailed description of experiments, and the last section discusses the conclusion of this paper and future work."
],
[
"In order to make a detailed analysis about Zhihu Live with data-driven approach, the first step is to collect Zhihu Live data. Since there is no public dataset available for research and no official APIs, we develop a web spider with python requests library to crawl data from Zhihu Live official website. Our crawling strategy is breadth-first traverse (we crawl the records one by one from the given URLs, and then extract more detailed information from sub URLs). We follow the crawler-etiquette defined in Zhihu's robots.txt. So we randomly set 2 to 5 seconds pause after per crawling to prevent from being banned by Zhihu, and avoid generating abnormal traffic as well. Our spider crawls 7242 records in total. Majority of the data are embedded in Ajax calls. In addition, we also crawl 286,938 comments of these Zhihu Lives. All of the datasets are stored in MongoDB, a widely-used NoSQL database."
],
[
"The rating scores are within a range of INLINEFORM0 . We calculate min, Q1, median, Q3, max, mean, and mode about review count (see Table TABREF8 ). Because the number of received review may greatly influence the reliability of the review score. From Table TABREF8 we can see that many responses on Zhihu Live receive no review at all, which are useless for quality evaluation.",
"One of the most challenging problems is no unique standard to evaluate a Zhihu Live as a low-quality or high-quality one. A collection of people may highly praise a Zhihu Live while others may not. In order to remove the sample bias, we delete those records whose review count is less than Q1 (11). So we get 5477 records which belong to 18 different fields.",
"The statistics of review scores after deletion are shown in Table TABREF9 . The mean score of 5477 records is 4.51, and the variance is 0.16. It indicates that the majority of Zhihu Lives are of high quality, and the users' scores are relatively stable.",
"Badge in Zhihu represents identity authentication of public figures and high-quality answerers. Only those who hold a Ph.D. degree or experts in a specific domain can be granted a badge. Hence, these speakers tend to host high-quality Zhihu Lives theoretically. Table TABREF10 shows that 3286 speakers hold no badge, 1475 speakers hold 1 badge, and 446 speakers hold 2 badges, respectively. The average score of Zhihu Lives given by two badges holders is slightly higher than others. We can conclude that whether the speaker holds badges does have slightly influence on the Zhihu Live quality ratings, which is consistent with our supposition.",
"Furthermore, we calculate the average scores of different Zhihu Live types (See Table TABREF11 ). We find that Others, Art and Sports fields contain more high-quality Zhihu Lives, while Delicacy, Business and Psychology fields contain more low-quality Lives. We can conclude that the topics related to self-improvement tend to receive more positive comments.",
"There are two types of Zhihu accounts: personal and organization. From Table TABREF12 , we can see that the majority of the Zhihu Live speakers are men with personal accounts. Organizations are less likely to give presentation and share ideas upon Zhihu Live platform."
],
[
"Apart from analyzing Zhihu Live dataset, we also adopt TextRank BIBREF5 algorithm to calculate TOP-50 hot words with wordcloud visualization (see Fig. FIGREF14 ). Bigger font denotes higher weight of the word, we can see clearly that the majority of the comments show contentment about Zhihu Lives, and the audience care more about “content”, “knowledge” and “speaker”."
],
[
"We define the quality evaluation problem as a standard regression task since the scores we aim to predict are continuous values. Hence we use Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) to estimate the performance of diverse learning algorithms. MAE and RMSE are used to evaluate the fit quality of the learning algorithms, if they are close to zero, it means the learning algorithm fits the dataset well. DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 denotes the number of samples, INLINEFORM1 denotes the input feature vector of a sample INLINEFORM2 , INLINEFORM3 denotes the learning algorithm, INLINEFORM4 denotes the groundtruth score of a Zhihu Live response INLINEFORM5 .",
"The results are calculated by randomly selecting 80% in the dataset as training set, and the remaining records as test set."
],
[
"In this section, we first give a brief introduction of the neural network and then present a description of our proposed MTNet to predict the quality of responses in detail."
],
[
"Deep neural network (DNN) has aroused dramatically attention due to their extraordinary performance in computer vision BIBREF6 , BIBREF7 , speech recognition BIBREF8 and natural language processing (NLP) BIBREF9 tasks. We apply DNN to our Zhihu Live quality evaluation problem aiming to approximate a function INLINEFORM0 which can accurately predict a Zhihu Live's score.",
"In our quality evaluation task, we take multiple layer perception BIBREF8 as the basic composition block of MTNet. Since we treat the Zhihu Live quality evaluation problem as a regression task, we set the output neuron equal to 1. DNNs are trained by backpropagation algorithm BIBREF8 .",
"The calculation details of neural network can be illustrated as: DISPLAYFORM0 ",
"where INLINEFORM0 represents output of a neuron, INLINEFORM1 represents weights of the connections, INLINEFORM2 represents bias, INLINEFORM3 represents nonlinear activation function (sigmoid, tanh and ReLU are often used in practice)."
],
[
"The architecture of our proposed MTNet is shown in Fig. FIGREF24 . It includes 4 parts: an input layer for receiving raw data; shared layers for general feature extraction through stacked layers and non-linear transformation; branched layers for specific feature extraction; and the output layer with one neuron. The output of the last shared layer is fed into different branches. These branches are trained jointly. In the last shared layer, the information flow is split into many branches BIBREF7 , which enables feature sharing and reuse. Finally, the output result is calculated in the output layer by averaging outputs from these branches BIBREF10 . The overall neural network with different branches is trained in parallel. The detailed configuration of MTNet is listed in Tabel TABREF21 .",
"The advantages of MTNet are as follows:",
"With multi-branched layers, different data under diverse levels can be fed into different branches, which enables MTNet extract multi-level features for later regression.",
"Multi-branched architecture in our MTNet can also act as an ensemble method BIBREF10 , which promotes the performance as well.",
"We use mean square error (MSE) with INLINEFORM0 regularization as the cost function. DISPLAYFORM0 ",
"where INLINEFORM0 denotes the raw input of INLINEFORM1 -th data sample, INLINEFORM2 denotes the capacity of dataset, INLINEFORM3 denotes groundtruth score of INLINEFORM4 -th Zhihu Live. INLINEFORM5 denotes INLINEFORM6 regularization to prevent from overfitting."
],
[
"We implement our method based on Scikit-Learn BIBREF11 and PyTorch , and the experiments are conducted on a server with NVIDIA Tesla K80 GPU."
],
[
"Several features' types in ZhihuLive-DB are not numerical, while machine learning predictor can only support numerical values as input. We clean the original dataset through the following preprocessing methods.",
"For categorical features, we replace them with one-hot-encoding BIBREF11 .",
"The missing data is filled with Median of each attribute.",
"We normalize the numerical values with minimum subtraction and range division to ensure values [0, 1] intervals.",
"The review scores are used as labels in our experiments, our task is to precisely estimate the scores with MTNet. Since the data-driven methods are based on crowd wisdom on Zhihu Live platform, they don't need any additional labeling work, and ensure the reliability of the scores of judgment as well."
],
[
"Since feature selection plays an import part in a data mining task, conventional feature extraction methods need domain knowledge BIBREF12 . Feature selection influences model's performance dramatically BIBREF13 .",
"For conventional regression algorithms, we conduct feature selection by adopting the best Top K features through univariate statistical tests. The hyper-parameter such as regularization item INLINEFORM0 is determined through cross validation. For each regression algorithm mentioned above, the hyper-parameters are carefully tuned, and the hyper-parameters with the best performance are denoted as the final comparison results. The details of f_regression BIBREF14 , BIBREF11 feature selection are as follows:",
"We calculate the correlation between each regressor and label as: INLINEFORM0 .",
"We convert the correlation into an F score and then to a p-value.",
"Finally, we get 15-dimension feature vector as the input for conventional (non-deep learning based) regressors.",
"Deep neural network can learn more abstract features via stacked layers. Deep learning has empowered many AI tasks (like computer vision BIBREF6 and natural language processing BIBREF9 ) in an end-to-end fashion. We apply deep learning to our Zhihu Live quality evaluation problem. Furthermore, we also compare our MTNet algorithm with baseline models with carefully designed features."
],
[
"We train our MTNet with Adam optimizer for 20 epochs. We set batch size as 8, and weight decay as 1e-5, we adopt 3 branched layers in MTNet. Detailed configuration is shown in Table TABREF21 . We use ReLU in shared layers, and relu6 in branched layers to prevent information loss. Our proposed MTNet achieves 0.2250 on MAE and 0.3216 on RMSE, respectively.",
"We compare MTNet with other mainstream regression algorithms BIBREF14 (linear regression, KNN, SVR, Random Forest and MLP). The architecture of MLP is 15-16-8-8-1, where each number represents the number of neurons in each layer. We try three kinds of kernels (RBF kernel, linear kernel, and poly kernel) with SVR in our experiments for fair comparison.",
"The results are listed in Table TABREF37 . Our method achieves the best performance in contrast to the compared baseline regressors."
],
[
"In this paper, we adopt a data-driven approach which includes data collection, data cleaning, data normalization, descriptive analysis and predictive analysis, to evaluate the quality on Zhihu Live platform. To the best of our knowledge, we are the first to research quality evaluation of voice-answering products. We publicize a dataset named ZhihuLive-DB, which contains 7242 records and 286,938 comments text for researchers to evaluate Zhihu Lives' quality. We also make a detailed analysis to reveal inner insights about Zhihu Live. In addition, we propose MTNet to accurately predict Zhihu Lives' quality. Our proposed method achieves best performance compared with the baselines.",
"As knowledge sharing and Q&A platforms continue to gain a greater popularity, the released dataset ZhihuLive-DB could greatly help researchers in related fields. However, current data and attributes are relatively unitary in ZhihuLive-DB. The malicious comment and assessment on SNS platforms are also very important issues to be taken into consideration. In our future work, we will gather richer dataset, and integrate malicious comments detector into our data-driven approach."
],
[
"Supported by Foundation Research Funds for the Central Universities (Program No.2662017JC049) and State Scholarship Fund (NO.261606765054)."
]
],
"section_name": [
"Introduction",
"Data Collection",
"Statistical Analysis",
"Comments Text Analysis",
"Performance Metric",
"MTNet",
"Deep Neural Network",
"MTNet Architecture",
"Experiments",
"Data Preprocessing",
"Feature Selection",
"Experimental Results",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"658ed6c9075d8f28276b1ce27a9276c230d4e7e9",
"76cb0665bc128e25a6f40a861d626eba4bc7d147"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"06f56481438a6b1e45fec2977ae120551012e3a9",
"ceca7ff6e214ab0fca0b242b92ea2c942e64bf51"
],
"answer": [
{
"evidence": [
"The rating scores are within a range of INLINEFORM0 . We calculate min, Q1, median, Q3, max, mean, and mode about review count (see Table TABREF8 ). Because the number of received review may greatly influence the reliability of the review score. From Table TABREF8 we can see that many responses on Zhihu Live receive no review at all, which are useless for quality evaluation.",
"Deep neural network (DNN) has aroused dramatically attention due to their extraordinary performance in computer vision BIBREF6 , BIBREF7 , speech recognition BIBREF8 and natural language processing (NLP) BIBREF9 tasks. We apply DNN to our Zhihu Live quality evaluation problem aiming to approximate a function INLINEFORM0 which can accurately predict a Zhihu Live's score."
],
"extractive_spans": [],
"free_form_answer": "Rating scores given by users",
"highlighted_evidence": [
"The rating scores are within a range of INLINEFORM0 . ",
"We apply DNN to our Zhihu Live quality evaluation problem aiming to approximate a function INLINEFORM0 which can accurately predict a Zhihu Live's score."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We define the quality evaluation problem as a standard regression task since the scores we aim to predict are continuous values. Hence we use Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) to estimate the performance of diverse learning algorithms. MAE and RMSE are used to evaluate the fit quality of the learning algorithms, if they are close to zero, it means the learning algorithm fits the dataset well. DISPLAYFORM0 DISPLAYFORM1"
],
"extractive_spans": [
"MAE and RMSE "
],
"free_form_answer": "",
"highlighted_evidence": [
"MAE and RMSE are used to evaluate the fit quality of the learning algorithms, if they are close to zero, it means the learning algorithm fits the dataset well."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"Can their method be transferred to other Q&A platforms (in other languages)?",
"What measures of quality do they use for a Q&A platform?"
],
"question_id": [
"80d425258d027e3ca3750375d170debb9d92fbc6",
"2ae66798333b905172e2c0954e9808662ab7f221"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"FIGURE 1. The pipeline of our data-driven method.",
"TABLE 2. Statistics of review scores after deletion.",
"TABLE 1. Review count of ZhihuLive-DB.",
"TABLE 3. Influence of badges on Zhihu Live scores.",
"FIGURE 2. Hot words extraction via TextRank. Bigger font denotes higher weight.",
"TABLE 4. Average scores of different Zhihu live types.",
"TABLE 5. Statistical information of speakers’s gender and type.",
"TABLE 6. Detailed configuration of MTNet.",
"FIGURE 3. Overall architecture of MTNet.",
"TABLE 7. Performance comparison with other regression models. The best results are given in bold style."
],
"file": [
"1-Figure1-1.png",
"2-Table2-1.png",
"2-Table1-1.png",
"3-Table3-1.png",
"3-Figure2-1.png",
"3-Table4-1.png",
"3-Table5-1.png",
"4-Table6-1.png",
"4-Figure3-1.png",
"5-Table7-1.png"
]
} | [
"What measures of quality do they use for a Q&A platform?"
] | [
[
"1903.00384-Deep Neural Network-0",
"1903.00384-Statistical Analysis-0"
]
] | [
"Rating scores given by users"
] | 261 |
1809.00129 | Contextual Encoding for Translation Quality Estimation | The task of word-level quality estimation (QE) consists of taking a source sentence and machine-generated translation, and predicting which words in the output are correct and which are wrong. In this paper, propose a method to effectively encode the local and global contextual information for each target word using a three-part neural network approach. The first part uses an embedding layer to represent words and their part-of-speech tags in both languages. The second part leverages a one-dimensional convolution layer to integrate local context information for each target word. The third part applies a stack of feed-forward and recurrent neural networks to further encode the global context in the sentence before making the predictions. This model was submitted as the CMU entry to the WMT2018 shared task on QE, and achieves strong results, ranking first in three of the six tracks. | {
"paragraphs": [
[
"Quality estimation (QE) refers to the task of measuring the quality of machine translation (MT) system outputs without reference to the gold translations BIBREF0 , BIBREF1 . QE research has grown increasingly popular due to the improved quality of MT systems, and potential for reductions in post-editing time and the corresponding savings in labor costs BIBREF2 , BIBREF3 . QE can be performed on multiple granularities, including at word level, sentence level, or document level. In this paper, we focus on quality estimation at word level, which is framed as the task of performing binary classification of translated tokens, assigning “OK” or “BAD” labels.",
"Early work on this problem mainly focused on hand-crafted features with simple regression/classification models BIBREF4 , BIBREF5 . Recent papers have demonstrated that utilizing recurrent neural networks (RNN) can result in large gains in QE performance BIBREF6 . However, these approaches encode the context of the target word by merely concatenating its left and right context words, giving them limited ability to control the interaction between the local context and the target word.",
"In this paper, we propose a neural architecture, Context Encoding Quality Estimation (CEQE), for better encoding of context in word-level QE. Specifically, we leverage the power of both (1) convolution modules that automatically learn local patterns of surrounding words, and (2) hand-crafted features that allow the model to make more robust predictions in the face of a paucity of labeled data. Moreover, we further utilize stacked recurrent neural networks to capture the long-term dependencies and global context information from the whole sentence.",
"We tested our model on the official benchmark of the WMT18 word-level QE task. On this task, it achieved highly competitive results, with the best performance over other competitors on English-Czech, English-Latvian (NMT) and English-Latvian (SMT) word-level QE task, and ranking second place on English-German (NMT) and German-English word-level QE task."
],
[
"The QE module receives as input a tuple INLINEFORM0 , where INLINEFORM1 is the source sentence, INLINEFORM2 is the translated sentence, and INLINEFORM3 is a set of word alignments. It predicts as output a sequence INLINEFORM4 , with each INLINEFORM5 . The overall architecture is shown in Figure FIGREF2 ",
"CEQE consists of three major components: (1) embedding layers for words and part-of-speech (POS) tags in both languages, (2) convolution encoding of the local context for each target word, and (3) encoding the global context by the recurrent neural network."
],
[
"Inspired by BIBREF6 , the first embedding layer is a vector representing each target word INLINEFORM0 obtained by concatenating the embedding of that word with those of the aligned words INLINEFORM1 in the source. If a target word is aligned to multiple source words, we average the embedding of all the source words, and concatenate the target word embedding with its average source embedding. The immediate left and right contexts for source and target words are also concatenated, enriching the local context information of the embedding of target word INLINEFORM2 . Thus, the embedding of target word INLINEFORM3 , denoted as INLINEFORM4 , is a INLINEFORM5 dimensional vector, where INLINEFORM6 is the dimension of the word embeddings. The source and target words use the same embedding parameters, and thus identical words in both languages, such as digits and proper nouns, have the same embedding vectors. This allows the model to easily identify identical words in both languages. Similarly, the POS tags in both languages share the same embedding parameters. Table TABREF4 shows the statistics of the set of POS tags over all language pairs."
],
[
"The main difference between the our work and the neural model of BIBREF6 is the one-dimensional convolution layer. Convolutions provide a powerful way to extract local context features, analogous to implicitly learning INLINEFORM0 -gram features. We now describe this integral part of our model.",
"After embedding each word in the target sentence INLINEFORM0 , we obtain a matrix of embeddings for the target sequence, INLINEFORM1 ",
"where INLINEFORM0 is the column-wise concatenation operator. We then apply one-dimensional convolution BIBREF7 , BIBREF8 on INLINEFORM1 along the target sequence to extract the local context of each target word. Specifically, a one-dimensional convolution involves a filter INLINEFORM2 , which is applied to a window of INLINEFORM3 words in target sequence to produce new features. INLINEFORM4 ",
"where INLINEFORM0 is a bias term and INLINEFORM1 is some functions. This filter is applied to each possible window of words in the embedding of target sentence INLINEFORM2 to produce features INLINEFORM3 ",
"By the padding proportionally to the filter size INLINEFORM0 at the beginning and the end of target sentence, we can obtain new features INLINEFORM1 of target sequence with output size equals to input sentence length INLINEFORM2 . To capture various granularities of local context, we consider filters with multiple window sizes INLINEFORM3 , and multiple filters INLINEFORM4 are learned for each window size.",
"The output of the one-dimensional convolution layer, INLINEFORM0 , is then concatenated with the embedding of POS tags of the target words, as well as its aligned source words, to provide a more direct signal to the following recurrent layers."
],
[
"After we obtain the representation of the source-target word pair by the convolution layer, we follow a similar architecture as BIBREF6 to refine the representation of the word pairs using feed-forward and recurrent networks.",
"Two feed-forward layers of size 400 with rectified linear units (ReLU; BIBREF9 );",
"One bi-directional gated recurrent unit (BiGRU; BIBREF10 ) layer with hidden size 200, where the forward and backward hidden states are concatenated and further normalized by layer normalization BIBREF11 .",
"Two feed-forward layers of hidden size 200 with rectified linear units;",
"One BiGRU layer with hidden size 100 using the same configuration of the previous BiGRU layer;",
"Two feed-forward layers of size 100 and 50 respectively with ReLU activation.",
"We concatenate the 31 baseline features extracted by the Marmot toolkit with the last 50 feed-forward hidden features. The baseline features are listed in Table TABREF13 . We then apply a softmax layer on the combined features to predict the binary labels."
],
[
"We minimize the binary cross-entropy loss between the predicted outputs and the targets. We train our neural model with mini-batch size 8 using Adam BIBREF12 with learning rate INLINEFORM0 and decay the learning rate by multiplying INLINEFORM1 if the F1-Multi score on the validation set decreases during the validation. Gradient norms are clipped within 5 to prevent gradient explosion for feed-forward networks or recurrent neural networks. Since the training corpus is rather small, we use dropout BIBREF13 with probability INLINEFORM2 to prevent overfitting."
],
[
"We evaluate our CEQE model on the WMT2018 Quality Estimation Shared Task for word-level English-German, German-English, English-Czech, and English-Latvian QE. Words in all languages are lowercased. The evaluation metric is the multiplication of F1-scores for the “OK” and “BAD” classes against the true labels. F1-score is the harmonic mean of precision and recall. In Table TABREF15 , our model achieves the best performance on three out of six test sets in the WMT 2018 word-level QE shared task."
],
[
"In Table TABREF21 , we show the ablation study of the features used in our model on English-German, German-English, and English-Czech. For each language pair, we show the performance of CEQE without adding the corresponding components specified in the second column respectively. The last row shows the performance of the complete CEQE with all the components. As the baseline features released in the WMT2018 QE Shared Task for English-Latvian are incomplete, we train our CEQE model without using such features. We can glean several observations from this data:",
"Because the number of “OK” tags is much larger than the number of “BAD” tags, the model is easily biased towards predicting the “OK” tag for each target word. The F1-OK scores are higher than the F1-BAD scores across all the language pairs.",
"For German-English, English Czech, and English-German (SMT), adding the baseline features can significantly improve the F1-BAD scores.",
"For English-Czech, English-German (SMT), and English-German (NMT), removing POS tags makes the model more biased towards predicting “OK” tags, which leads to higher F1-OK scores and lower F1-BAD scores.",
"Adding the convolution layer helps to boost the performance of F1-Multi, especially on English-Czech and English-Germen (SMT) tasks. Comparing the F1-OK scores of the model with and without the convolution layer, we find that adding the convolution layer help to boost the F1-OK scores when translating from English to other languages, i.e., English-Czech, English-German (SMT and NMT). We conjecture that the convolution layer can capture the local information more effectively from the aligned source words in English."
],
[
"Table TABREF22 shows two examples of quality prediction on the validation data of WMT2018 QE task for English-Czech. In the first example, the model without POS tags and baseline features is biased towards predicting “OK” tags, while the model with full features can detect the reordering error. In the second example, the target word “panelu” is a variant of the reference word “panel”. The target word “znaky” is the plural noun of the reference “znak”. Thus, their POS tags have some subtle differences. Note the target word “zmnit” and its aligned source word “change” are both verbs. We can observe that POS tags can help the model capture such syntactic variants."
],
[
"During training, we find that the model can easily overfit the training data, which yields poor performance on the test and validation sets. To make the model more stable on the unseen data, we apply dropout to the word embeddings, POS embeddings, vectors after the convolutional layers and the stacked recurrent layers. In Figure FIGREF24 , we examine the accuracies dropout rates in INLINEFORM0 . We find that adding dropout alleviates overfitting issues on the training set. If we reduce the dropout rate to INLINEFORM1 , which means randomly setting some values to zero with probability INLINEFORM2 , the training F1-Multi increases rapidly and the validation F1-multi score is the lowest among all the settings. Preliminary results proved best for a dropout rate of INLINEFORM3 , so we use this in all the experiments."
],
[
"In this paper, we propose a deep neural architecture for word-level QE. Our framework leverages a one-dimensional convolution on the concatenated word embeddings of target and its aligned source words to extract salient local feature maps. In additions, bidirectional RNNs are applied to capture temporal dependencies for better sequence prediction. We conduct thorough experiments on four language pairs in the WMT2018 shared task. The proposed framework achieves highly competitive results, outperforms all other participants on English-Czech and English-Latvian word-level, and is second place on English-German, and German-English language pairs."
],
[
"The authors thank Andre Martins for his advice regarding the word-level QE task.",
"This work is sponsored by Defense Advanced Research Projects Agency Information Innovation Office (I2O). Program: Low Resource Languages for Emergent Incidents (LORELEI). Issued by DARPA/I2O under Contract No. HR0011-15-C0114. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on."
]
],
"section_name": [
"Introduction",
"Model",
"Embedding Layer",
"One-dimensional Convolution Layer",
"RNN-based Encoding",
"Training",
"Experiment",
"Ablation Analysis",
"Case Study",
"Sensitivity Analysis",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"12f926c00e34664f27df826ef4f14fa1181709e8",
"715daa16337076d732bc2384ad716a29e07ed08b"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"06f9122c9f70c74d32aa4a701e6a32db8e565c97",
"3cee4fc1b12497f86fdc68f6b392edf41ce1a0cb"
],
"answer": [
{
"evidence": [
"After we obtain the representation of the source-target word pair by the convolution layer, we follow a similar architecture as BIBREF6 to refine the representation of the word pairs using feed-forward and recurrent networks.",
"Two feed-forward layers of size 400 with rectified linear units (ReLU; BIBREF9 );",
"One bi-directional gated recurrent unit (BiGRU; BIBREF10 ) layer with hidden size 200, where the forward and backward hidden states are concatenated and further normalized by layer normalization BIBREF11 .",
"Two feed-forward layers of hidden size 200 with rectified linear units;",
"One BiGRU layer with hidden size 100 using the same configuration of the previous BiGRU layer;",
"Two feed-forward layers of size 100 and 50 respectively with ReLU activation."
],
"extractive_spans": [],
"free_form_answer": "8",
"highlighted_evidence": [
"After we obtain the representation of the source-target word pair by the convolution layer, we follow a similar architecture as BIBREF6 to refine the representation of the word pairs using feed-forward and recurrent networks.\n\nTwo feed-forward layers of size 400 with rectified linear units (ReLU; BIBREF9 );\n\nOne bi-directional gated recurrent unit (BiGRU; BIBREF10 ) layer with hidden size 200, where the forward and backward hidden states are concatenated and further normalized by layer normalization BIBREF11 .\n\nTwo feed-forward layers of hidden size 200 with rectified linear units;\n\nOne BiGRU layer with hidden size 100 using the same configuration of the previous BiGRU layer;\n\nTwo feed-forward layers of size 100 and 50 respectively with ReLU activation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"CEQE consists of three major components: (1) embedding layers for words and part-of-speech (POS) tags in both languages, (2) convolution encoding of the local context for each target word, and (3) encoding the global context by the recurrent neural network.",
"RNN-based Encoding",
"After we obtain the representation of the source-target word pair by the convolution layer, we follow a similar architecture as BIBREF6 to refine the representation of the word pairs using feed-forward and recurrent networks.",
"Two feed-forward layers of size 400 with rectified linear units (ReLU; BIBREF9 );",
"One bi-directional gated recurrent unit (BiGRU; BIBREF10 ) layer with hidden size 200, where the forward and backward hidden states are concatenated and further normalized by layer normalization BIBREF11 .",
"Two feed-forward layers of hidden size 200 with rectified linear units;",
"One BiGRU layer with hidden size 100 using the same configuration of the previous BiGRU layer;",
"Two feed-forward layers of size 100 and 50 respectively with ReLU activation."
],
"extractive_spans": [],
"free_form_answer": "2",
"highlighted_evidence": [
"CEQE consists of three major components: (1) embedding layers for words and part-of-speech (POS) tags in both languages, (2) convolution encoding of the local context for each target word, and (3) encoding the global context by the recurrent neural network.\n\n",
"RNN-based Encoding\nAfter we obtain the representation of the source-target word pair by the convolution layer, we follow a similar architecture as BIBREF6 to refine the representation of the word pairs using feed-forward and recurrent networks.\n\nTwo feed-forward layers of size 400 with rectified linear units (ReLU; BIBREF9 );\n\nOne bi-directional gated recurrent unit (BiGRU; BIBREF10 ) layer with hidden size 200, where the forward and backward hidden states are concatenated and further normalized by layer normalization BIBREF11 .\n\nTwo feed-forward layers of hidden size 200 with rectified linear units;\n\nOne BiGRU layer with hidden size 100 using the same configuration of the previous BiGRU layer;\n\nTwo feed-forward layers of size 100 and 50 respectively with ReLU activation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"cd8569a78d269adc4e4cd44e61214bbb37417f79",
"e04e2435855946e267d0587c11c19f6d0875f9b2"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Best performance of our model on six datasets in the WMT2018 word-level QE shared task on the leader board (updated on July 27th 2018)"
],
"extractive_spans": [],
"free_form_answer": "Second on De-En and En-De (NMT) tasks, and third on En-De (SMT) task.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Best performance of our model on six datasets in the WMT2018 word-level QE shared task on the leader board (updated on July 27th 2018)"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate our CEQE model on the WMT2018 Quality Estimation Shared Task for word-level English-German, German-English, English-Czech, and English-Latvian QE. Words in all languages are lowercased. The evaluation metric is the multiplication of F1-scores for the “OK” and “BAD” classes against the true labels. F1-score is the harmonic mean of precision and recall. In Table TABREF15 , our model achieves the best performance on three out of six test sets in the WMT 2018 word-level QE shared task.",
"FLOAT SELECTED: Table 3: Best performance of our model on six datasets in the WMT2018 word-level QE shared task on the leader board (updated on July 27th 2018)"
],
"extractive_spans": [],
"free_form_answer": "3rd in En-De (SMT), 2nd in En-De (NNT) and 2nd ibn De-En",
"highlighted_evidence": [
"In Table TABREF15 , our model achieves the best performance on three out of six test sets in the WMT 2018 word-level QE shared task.",
"FLOAT SELECTED: Table 3: Best performance of our model on six datasets in the WMT2018 word-level QE shared task on the leader board (updated on July 27th 2018)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they evaluate whether local or global context proves more important?",
"How many layers of recurrent neural networks do they use for encoding the global context?",
"How did their model rank in three CMU WMT2018 tracks it didn't rank first?"
],
"question_id": [
"9d80ad8cf4d5941a32d33273dc5678195ad1e0d2",
"bd817a520a62ddd77e65e74e5a7e9006cdfb19b3",
"c635295c2b77aaab28faecca3b5767b0c4ab3728"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Statistics of POS tags over all language pairs",
"Figure 1: The architecture of our model, with the convolutional encoder on the left, and stacked RNN on the right.",
"Table 2: Baseline Features",
"Table 3: Best performance of our model on six datasets in the WMT2018 word-level QE shared task on the leader board (updated on July 27th 2018)",
"Figure 2: Effect of the dropout rate during training.",
"Table 4: Ablation study on the WMT18 Test Set",
"Table 5: Examples on WMT2018 validation data. The source and translated sentences, the reference sentences, the predictions of the CEQE without and with POS tags and baseline features are shown. Words predicted as OK are shown in green, those predicted as BAD are shown in red, the difference between the translated and reference sentences are shown in blue."
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"4-Figure2-1.png",
"5-Table4-1.png",
"6-Table5-1.png"
]
} | [
"How many layers of recurrent neural networks do they use for encoding the global context?",
"How did their model rank in three CMU WMT2018 tracks it didn't rank first?"
] | [
[
"1809.00129-RNN-based Encoding-5",
"1809.00129-RNN-based Encoding-4",
"1809.00129-Model-1",
"1809.00129-RNN-based Encoding-2",
"1809.00129-RNN-based Encoding-1",
"1809.00129-RNN-based Encoding-3",
"1809.00129-RNN-based Encoding-0"
],
[
"1809.00129-4-Table3-1.png",
"1809.00129-Experiment-0"
]
] | [
"2",
"3rd in En-De (SMT), 2nd in En-De (NNT) and 2nd ibn De-En"
] | 262 |
1710.11027 | Named Entity Recognition in Twitter using Images and Text | Named Entity Recognition (NER) is an important subtask of information extraction that seeks to locate and recognise named entities. Despite recent achievements, we still face limitations with correctly detecting and classifying entities, prominently in short and noisy text, such as Twitter. An important negative aspect in most of NER approaches is the high dependency on hand-crafted features and domain-specific knowledge, necessary to achieve state-of-the-art results. Thus, devising models to deal with such linguistically complex contexts is still challenging. In this paper, we propose a novel multi-level architecture that does not rely on any specific linguistic resource or encoded rule. Unlike traditional approaches, we use features extracted from images and text to classify named entities. Experimental tests against state-of-the-art NER for Twitter on the Ritter dataset present competitive results (0.59 F-measure), indicating that this approach may lead towards better NER models. | {
"paragraphs": [
[
"Named Entity Recognition (NER) is an important step in most of the natural language processing (NLP) pipelines. It is designed to robustly handle proper names, which is essential for many applications. Although a seemingly simple task, it faces a number of challenges in noisy datasets and it is still considered an emerging research area BIBREF0 , BIBREF1 . Despite recent efforts, we still face limitations at identifying entities and (consequently) correctly classifying them. Current state-of-the-art NER systems typically have about 85-90% accuracy on news text - such as articles (CoNLL03 shared task data set) - but they still perform poorly (about 30-50% accuracy) on short texts, which do not have implicit linguistic formalism (e.g. punctuation, spelling, spacing, formatting, unorthodox capitalisation, emoticons, abbreviations and hashtags) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF1 . Furthermore, the lack of external knowledge resources is an important gap in the process regardless of writing style BIBREF5 . To face these problems, research has been focusing on microblog-specific information extraction techniques BIBREF2 , BIBREF6 .",
"In this paper, we propose a joint clustering architecture that aims at minimizing the current gap between world knowledge and knowledge available in open domain knowledge bases (e.g., Freebase) for NER systems, by extracting features from unstructured data sources. To this aim, we use images and text from the web as input data. Thus, instead of relying on encoded information and manually annotated resources (the major limitation in NER architectures) we focus on a multi-level approach for discovering named entities, combining text and image features with a final classifier based on a decision tree model. We follow an intuitive and simple idea: some types of images are more related to people (e.g. faces) whereas some others are more related to organisations (e.g. logos), for instance. This principle is applied similarly to the text retrieved from websites: keywords for search engines representing names and surnames of people will often return similarly related texts, for instance. Thus, we derive some indicators (detailed in sec:finalclassifier which are then used as input features in a final classifier.",
"To the best of our knowledge, this is the first report of a NER architecture which aims to provide a priori information based on clusters of images and text features."
],
[
"Over the past few years, the problem of recognizing named entities in natural language texts has been addressed by several approaches and frameworks BIBREF7 , BIBREF8 . Existing approaches basically adopt look-up strategies and use standard local features, such as part-of-speech tags, previous and next words, substrings, shapes and regex expressions, for instance. The main drawback is the performance of those models with noisy data, such as Tweets. A major reason is that they rely heavily on hand-crafted features and domain-specific knowledge. In terms of architecture, NER algorithms may also be designed based on generative (e.g., Naive Bayes) or discriminative (e.g., MaxEnt) models. Furthermore, sequence models (HMMs, CMM, MEMM and CRF) are a natural choice to design such systems. A more recent study proposed by Lample et al., 2016 BIBREF9 used neural architectures to solve this problem. Similarly in terms of architecture, Al-Rfou et al., 2015 BIBREF10 had also proposed a model (without dependency) that learns distributed word representations (word embeddings) which encode semantic and syntactic features of words in each language. Chiu and Nichols, 2015 BIBREF11 proposed a neural network architecture that automatically detects word and character-level features using a hybrid bidirectional LSTM and CNN. Thus, these models work without resorting to any language-specific knowledge or resources such as gazetteers. They, however, focused on newswire to improve current state-of-the-art systems and not on the microblogs context, in which they are naturally harder to outperform due to the aforementioned issues. According to Derczynski et al., 2015 BIBREF1 some approaches have been proposed for Twitter, but they are mostly still in development and often not freely available."
],
[
"The main insight underlying this work is that we can produce a NER model which performs similarly to state-of-the-art approaches but without relying on any specific resource or encoded rule. To this aim, we propose a multi-level architecture which intends to produce biased indicators to a certain class (LOC, PER or ORG). These outcomes are then used as input features for our final classifier. We perform clustering on images and texts associated to a given term INLINEFORM0 existing in complete or partial sentences INLINEFORM1 (e.g., “new york” or “einstein”), leveraging the global context obtained from the Web providing valuable insights apart from standard local features and hand-coded information. fig:architecture gives an overview of the proposed architecture.",
"In the first step (A), we simply apply POS Tagging and Shallow Parsing to filter out tokens except for those tagged as INLINEFORM0 or INLINEFORM1 and their INLINEFORM2 (local context). Afterwards, we use the search engine (B) to query and cache (C) the top INLINEFORM3 texts and images associated to each term INLINEFORM4 , where INLINEFORM5 is the set resulting of the pre-processing step (A) for each (partial or complete) sentence INLINEFORM6 . This resulting data (composed of excerpts of texts and images from web pages) is used to predict a possible class for a given term. These outcomes are then used in the first two levels (D.1 and D.2) of our approach: the Computer Vision and Text Analytics components, respectively, which we introduce as follows:",
"Computer Vision (CV): Detecting Objects: Function Description (D.1): given a set of images INLINEFORM0 , the basic idea behind this component is to detect a specific object (denoted by a class INLINEFORM1 ) in each image. Thus, we query the web for a given term INLINEFORM2 and then extract the features from each image and try to detect a specific object (e.g., logos for ORG) for the top INLINEFORM3 images retrieved as source candidates. The mapping between objects and NER classes is detailed in tab:tbempirical.",
"Training (D.1): we used SIFT (Scale Invariant Feature Transform) features BIBREF12 for extracting image descriptors and BoF (Bag of Features) BIBREF13 , BIBREF14 for clustering the histograms of extracted features. The clustering is possible by constructing a large vocabulary of many visual words and representing each image as a histogram of the frequency words that are in the image. We use k-means BIBREF15 to cluster the set of descriptors to INLINEFORM0 clusters. The resulting clusters are compact and separated by similar characteristics. An empirical analysis shows that some image groups are often related to certain named entities (NE) classes when using search engines, as described in tab:tbempirical. For training purposes, we used the Scene 13 dataset BIBREF16 to train our classifiers for location (LOC), “faces” from Caltech 101 Object Categories BIBREF17 to train our person (PER) and logos from METU dataset BIBREF18 for organisation ORG object detection. These datasets produces the training data for our set of supervised classifiers (1 for ORG, 1 for PER and 10 for LOC). We trained our classifiers using Support Vector Machines BIBREF19 once they generalize reasonably enough for the task.",
"Text Analytics (TA): Text Classification - Function Description (D.2): analogously to (D.1), we perform clustering to group texts together that are “distributively” similar. Thus, for each retrieved web page (title and excerpt of its content), we perform the classification based on the main NER classes. We extracted features using a classical sparse vectorizer (Term frequency-Inverse document frequency - TF-IDF. In experiments, we did not find a significant performance gain using HashingVectorizer) - Training (D.2): with this objective in mind, we trained classifiers that rely on a bag-of-words technique. We collected data using DBpedia instances to create our training dataset ( INLINEFORM0 ) and annotated each instance with the respective MUC classes, i.e. PER, ORG and LOC. Listing shows an example of a query to obtain documents of organizations (ORG class). Thereafter, we used this annotated dataset to train our model.",
"where INLINEFORM0 and INLINEFORM1 represent the INLINEFORM2 and INLINEFORM3 position of INLINEFORM4 and INLINEFORM5 , respectively. INLINEFORM6 represents the n-gram of POS tag. INLINEFORM7 and INLINEFORM8 ( INLINEFORM9 ) represent the total objects found by a classifier INLINEFORM10 for a given class INLINEFORM11 ( INLINEFORM12 ) (where N is the total of retrieved images INLINEFORM15 ). INLINEFORM16 and INLINEFORM17 represent the distance between the two higher predictions ( INLINEFORM18 ), i.e. INLINEFORM19 . Finally, INLINEFORM20 represents the sum of all predictions made by all INLINEFORM21 classifiers INLINEFORM22 ( INLINEFORM23 ). - Training (E): the outcomes of D.1 and D.2 ( INLINEFORM26 ) are used as input features to our final classifier. We implemented a simple Decision Tree (non-parametric supervised learning method) algorithm for learning simple decision rules inferred from the data features (since it does not require any assumptions of linearity in the data and also works well with outliers, which are expected to be found more often in a noisy environment, such as the Web of Documents)."
],
[
"In order to check the overall performance of the proposed technique, we ran our algorithm without any further rule or apriori knowledge using a gold standard for NER in microblogs (Ritter dataset BIBREF2 ), achieving INLINEFORM0 F1. tab:performance details the performance measures per class. tab:relatedwork presents current state-of-the-art results for the same dataset. The best model achieves INLINEFORM1 F1-measure, but uses encoded rules. Models which are not rule-based, achieve INLINEFORM2 and INLINEFORM3 . We argue that in combination with existing techniques (such as linguistic patterns), we can potentially achieve even better results.",
"As an example, the sentence “paris hilton was once the toast of the town” can show the potential of the proposed approach. The token “paris” with a LOC bias (0.6) and “hilton” (global brand of hotels and resorts) with indicators leading to LOC (0.7) or ORG (0.1, less likely though). Furthermore, “town” being correctly biased to LOC (0.7). The algorithm also suggests that the compound “paris hilton” is more likely to be a PER instead (0.7) and updates (correctly) the previous predictions. As a downside in this example, the algorithm misclassified “toast” as LOC. However, in this same example, Stanford NER annotates (mistakenly) only “paris” as LOC. It is worth noting also the ability of the algorithm to take advantage of search engine capabilities. When searching for “miCRs0ft”, the returned values strongly indicate a bias for ORG, as expected ( INLINEFORM0 = 0.2, INLINEFORM1 = 0.8, INLINEFORM2 = 0.0, INLINEFORM3 = 6, INLINEFORM4 = -56, INLINEFORM5 = 0.0, INLINEFORM6 = 0.5, INLINEFORM7 = 0.0, INLINEFORM8 = 5). More local organisations are also recognized correctly, such as “kaufland” (German supermarket), which returns the following metadata: INLINEFORM9 = 0.2, INLINEFORM10 = 0.4, INLINEFORM11 = 0.0, INLINEFORM12 = 2, INLINEFORM13 = -50, INLINEFORM14 = 0.1, INLINEFORM15 = 0.4, INLINEFORM16 = 0.0, INLINEFORM17 = 3."
],
[
"A disadvantage when using web search engines is that they are not open and free. This can be circumvented by indexing and searching on other large sources of information, such as Common Crawl and Flickr. However, maintaining a large source of images would be an issue, e.g. the Flickr dataset may not be comprehensive enough (i.e. tokens may not return results). This will be a subject of future work. Besides, an important step in the pre-processing is the classification of part-of-speech tags. In the Ritter dataset our current error propagation is 0.09 (107 tokens which should be classified as NOUN) using NLTK 3.0. Despite good performance (91% accuracy), we plan to benchmark this component. In terms of processing time, the bottleneck of the current implementation is the time required to extract features from images, as expected. Currently we achieve a performance of 3~5 seconds per sentence and plan to also optimize this component. The major advantages of this approach are: 1) the fact that there are no hand-crafted rules encoded; 2) the ability to handle misspelled words (because the search engine alleviates that and returns relevant or related information) and incomplete sentences; 3) the generic design of its components, allowing multilingual processing with little effort (the only dependency is the POS tagger) and straightforward extension to support more NER classes (requiring a corpus of images and text associated to each desired NER class, which can be obtained from a Knowledge Base, such as DBpedia, and an image dataset, such as METU dataset). While initial results in a gold standard dataset showed the potential of the approach, we also plan to integrate these outcomes into a Sequence Labeling (SL) system, including neural architectures such as LSTM, which are more suitable for such tasks as NER or POS. We argue that this can potentially reduce the existing (significant) gap in NER performance on microblogs."
],
[
"In this paper we presented a novel architecture for NER that expands the feature set space based on feature clustering of images and texts, focused on microblogs. Due to their terse nature, such noisy data often lack enough context, which poses a challenge to the correct identification of named entities. To address this issue we have presented and evaluated a novel approach using the Ritter dataset, showing consistent results over state-of-the-art models without using any external resource or encoded rule, achieving an average of 0.59 F1. The results slightly outperformed state-of-the-art models which do not rely on encoded rules (0.49 and 0.54 F1), suggesting the viability of using the produced metadata to also boost existing NER approaches. A further important contribution is the ability to handle single tokens and misspelled words successfully, which is of utmost importance in order to better understand short texts. Finally, the architecture of the approach and its indicators introduce potential to transparently support multilingual data, which is the subject of ongoing investigation."
],
[
"This research was supported in part by an EU H2020 grant provided for the HOBBIT project (GA no. 688227) and CAPES Foundation (BEX 10179135)."
]
],
"section_name": [
"Introduction",
"Related Work",
"Conceptual Architecture",
"Experiments",
"Discussion",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"c5e937d4ce4439bba9ec8421fd6bad34d72dcec5",
"c7ef7c046ade7d45ffe84d1b62a5f2d9a849b2c8"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"7a2953cf164a1abd38baa58f11cb76d7f8915941",
"fee3d4d518d109cd664546cf20c3b69666a11e66"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"In order to check the overall performance of the proposed technique, we ran our algorithm without any further rule or apriori knowledge using a gold standard for NER in microblogs (Ritter dataset BIBREF2 ), achieving INLINEFORM0 F1. tab:performance details the performance measures per class. tab:relatedwork presents current state-of-the-art results for the same dataset. The best model achieves INLINEFORM1 F1-measure, but uses encoded rules. Models which are not rule-based, achieve INLINEFORM2 and INLINEFORM3 . We argue that in combination with existing techniques (such as linguistic patterns), we can potentially achieve even better results."
],
"extractive_spans": [
" a gold standard for NER in microblogs"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to check the overall performance of the proposed technique, we ran our algorithm without any further rule or apriori knowledge using a gold standard for NER in microblogs (Ritter dataset BIBREF2 ), "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"950fdcdc2d63aa8b87199a9362e9ab35d061c506",
"eb8529b4c442b50c7b4738a22f422f239387c38c"
],
"answer": [
{
"evidence": [
"In this paper we presented a novel architecture for NER that expands the feature set space based on feature clustering of images and texts, focused on microblogs. Due to their terse nature, such noisy data often lack enough context, which poses a challenge to the correct identification of named entities. To address this issue we have presented and evaluated a novel approach using the Ritter dataset, showing consistent results over state-of-the-art models without using any external resource or encoded rule, achieving an average of 0.59 F1. The results slightly outperformed state-of-the-art models which do not rely on encoded rules (0.49 and 0.54 F1), suggesting the viability of using the produced metadata to also boost existing NER approaches. A further important contribution is the ability to handle single tokens and misspelled words successfully, which is of utmost importance in order to better understand short texts. Finally, the architecture of the approach and its indicators introduce potential to transparently support multilingual data, which is the subject of ongoing investigation."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The results slightly outperformed state-of-the-art models which do not rely on encoded rules (0.49 and 0.54 F1), suggesting the viability of using the produced metadata to also boost existing NER approaches."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"In order to check the overall performance of the proposed technique, we ran our algorithm without any further rule or apriori knowledge using a gold standard for NER in microblogs (Ritter dataset BIBREF2 ), achieving INLINEFORM0 F1. tab:performance details the performance measures per class. tab:relatedwork presents current state-of-the-art results for the same dataset. The best model achieves INLINEFORM1 F1-measure, but uses encoded rules. Models which are not rule-based, achieve INLINEFORM2 and INLINEFORM3 . We argue that in combination with existing techniques (such as linguistic patterns), we can potentially achieve even better results."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"\nIn order to check the overall performance of the proposed technique, we ran our algorithm without any further rule or apriori knowledge using a gold standard for NER in microblogs (Ritter dataset BIBREF2 ), achieving INLINEFORM0 F1. tab:performance details the performance measures per class. tab:relatedwork presents current state-of-the-art results for the same dataset. The best model achieves INLINEFORM1 F1-measure, but uses encoded rules. Models which are not rule-based, achieve INLINEFORM2 and INLINEFORM3 . "
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"39e7fe0c1ad83de99005d660fbbe0e87a11fe031",
"f55939ff6c8f9725be300477bdc792329ad51839"
],
"answer": [
{
"evidence": [
"Text Analytics (TA): Text Classification - Function Description (D.2): analogously to (D.1), we perform clustering to group texts together that are “distributively” similar. Thus, for each retrieved web page (title and excerpt of its content), we perform the classification based on the main NER classes. We extracted features using a classical sparse vectorizer (Term frequency-Inverse document frequency - TF-IDF. In experiments, we did not find a significant performance gain using HashingVectorizer) - Training (D.2): with this objective in mind, we trained classifiers that rely on a bag-of-words technique. We collected data using DBpedia instances to create our training dataset ( INLINEFORM0 ) and annotated each instance with the respective MUC classes, i.e. PER, ORG and LOC. Listing shows an example of a query to obtain documents of organizations (ORG class). Thereafter, we used this annotated dataset to train our model."
],
"extractive_spans": [],
"free_form_answer": "word feature",
"highlighted_evidence": [
" Thus, for each retrieved web page (title and excerpt of its content), we perform the classification based on the main NER classes. We extracted features using a classical sparse vectorizer (Term frequency-Inverse document frequency - TF-IDF. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Text Analytics (TA): Text Classification - Function Description (D.2): analogously to (D.1), we perform clustering to group texts together that are “distributively” similar. Thus, for each retrieved web page (title and excerpt of its content), we perform the classification based on the main NER classes. We extracted features using a classical sparse vectorizer (Term frequency-Inverse document frequency - TF-IDF. In experiments, we did not find a significant performance gain using HashingVectorizer) - Training (D.2): with this objective in mind, we trained classifiers that rely on a bag-of-words technique. We collected data using DBpedia instances to create our training dataset ( INLINEFORM0 ) and annotated each instance with the respective MUC classes, i.e. PER, ORG and LOC. Listing shows an example of a query to obtain documents of organizations (ORG class). Thereafter, we used this annotated dataset to train our model."
],
"extractive_spans": [
"extracted features using a classical sparse vectorizer (Term frequency-Inverse document frequency - TF-ID"
],
"free_form_answer": "",
"highlighted_evidence": [
"We extracted features using a classical sparse vectorizer (Term frequency-Inverse document frequency - TF-IDF."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"072a2d6789977ee07b22090949d0069cc1d09e88",
"e68c4738f78210b1e2762d16f6d00df746a31d55"
],
"answer": [
{
"evidence": [
"Computer Vision (CV): Detecting Objects: Function Description (D.1): given a set of images INLINEFORM0 , the basic idea behind this component is to detect a specific object (denoted by a class INLINEFORM1 ) in each image. Thus, we query the web for a given term INLINEFORM2 and then extract the features from each image and try to detect a specific object (e.g., logos for ORG) for the top INLINEFORM3 images retrieved as source candidates. The mapping between objects and NER classes is detailed in tab:tbempirical."
],
"extractive_spans": [],
"free_form_answer": "LOC (Building, Suburb, Street, City, Country, Mountain, Highway, Forest, Coast and Map), ORG (Company Logo), PER (Human Face ).",
"highlighted_evidence": [
"Thus, we query the web for a given term INLINEFORM2 and then extract the features from each image and try to detect a specific object (e.g., logos for ORG) for the top INLINEFORM3 images retrieved as source candidates. The mapping between objects and NER classes is detailed in tab:tbempirical."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Training (D.1): we used SIFT (Scale Invariant Feature Transform) features BIBREF12 for extracting image descriptors and BoF (Bag of Features) BIBREF13 , BIBREF14 for clustering the histograms of extracted features. The clustering is possible by constructing a large vocabulary of many visual words and representing each image as a histogram of the frequency words that are in the image. We use k-means BIBREF15 to cluster the set of descriptors to INLINEFORM0 clusters. The resulting clusters are compact and separated by similar characteristics. An empirical analysis shows that some image groups are often related to certain named entities (NE) classes when using search engines, as described in tab:tbempirical. For training purposes, we used the Scene 13 dataset BIBREF16 to train our classifiers for location (LOC), “faces” from Caltech 101 Object Categories BIBREF17 to train our person (PER) and logos from METU dataset BIBREF18 for organisation ORG object detection. These datasets produces the training data for our set of supervised classifiers (1 for ORG, 1 for PER and 10 for LOC). We trained our classifiers using Support Vector Machines BIBREF19 once they generalize reasonably enough for the task."
],
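The quoted training description combines SIFT descriptors, a Bag-of-Features vocabulary built with k-means, and SVM classifiers. The sketch below illustrates that pipeline under stated assumptions: cv2.SIFT_create() requires a recent OpenCV build, and the vocabulary size, kernel choice and file paths are placeholders rather than the paper's settings.

```python
# Hedged sketch of a SIFT + Bag-of-Features pipeline:
# SIFT descriptors -> k-means visual vocabulary -> per-image histogram -> SVM.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def sift_descriptors(image_paths):
    sift = cv2.SIFT_create()  # requires OpenCV >= 4.4 (or opencv-contrib-python)
    per_image, stacked = [], []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        desc = desc if desc is not None else np.empty((0, 128), np.float32)
        per_image.append(desc)
        stacked.append(desc)
    return per_image, np.vstack(stacked)

def bof_histograms(per_image_desc, kmeans):
    k = kmeans.n_clusters
    hists = []
    for desc in per_image_desc:
        hist = np.zeros(k)
        if len(desc):
            words, counts = np.unique(kmeans.predict(desc), return_counts=True)
            hist[words] = counts
        hists.append(hist / max(hist.sum(), 1.0))  # normalized visual-word histogram
    return np.array(hists)

# Usage (paths, labels and vocabulary size are placeholders, not the paper's datasets):
# per_img, stacked = sift_descriptors(train_paths)
# kmeans = KMeans(n_clusters=100, n_init=10).fit(stacked)
# clf = SVC(kernel="linear").fit(bof_histograms(per_img, kmeans), train_labels)
```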
"extractive_spans": [
"BoF (Bag of Features) BIBREF13",
"SIFT (Scale Invariant Feature Transform) features BIBREF12"
],
"free_form_answer": "",
"highlighted_evidence": [
"Training (D.1): we used SIFT (Scale Invariant Feature Transform) features BIBREF12 for extracting image descriptors and BoF (Bag of Features) BIBREF13 , BIBREF14 for clustering the histograms of extracted features."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do they evaluate only on English datasets?",
"What is the Ritter dataset?",
"Does this model perform better than the state of the art?",
"What features are extracted from text?",
"What features are extracted from images?"
],
"question_id": [
"7f8fc3c7d59aba80a3e7c839db6892a1fc329210",
"2d92ae6b36567e7edb6afdd72f97b06ac144fbdf",
"a5df7361ae37b9512fb57cb93efbece9ded8cab1",
"915e4d0b3cb03789a20380ead961d473cb95bfc3",
"c01a8b42fd27b0a3bec717ededd98b6d085a0f5c"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1: Overview of the approach: combining computer vision and machine learning in a generic NER architecture",
"Table 1: NER classes and respective objects to be detected in a given image. For LOC we trained more models due to the diversity of the object candidates.",
"Table 2: Performancemeasure for our approach in Ritter dataset: 4-fold cross validation",
"Table 3: Performance measures (PER, ORG and LOC classes) of state-of-the-art NER for short texts (Ritter dataset). Approaches which do not rely on hand-crafted rules and Gazetteers are highlighted in gray. Etter et al., 2013 trained using 10 classes."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png"
]
} | [
"What features are extracted from text?",
"What features are extracted from images?"
] | [
[
"1710.11027-Conceptual Architecture-4"
],
[
"1710.11027-Conceptual Architecture-2",
"1710.11027-Conceptual Architecture-3"
]
] | [
"word feature",
"LOC (Building, Suburb, Street, City, Country, Mountain, Highway, Forest, Coast and Map), ORG (Company Logo), PER (Human Face )."
] | 263 |
1911.10401 | A Transformer-based approach to Irony and Sarcasm detection | Figurative Language (FL) seems ubiquitous in all social-media discussion forums and chats, posing extra challenges to sentiment analysis endeavors. Identification of FL schemas in short texts remains largely an unresolved issue in the broader field of Natural Language Processing (NLP), mainly due to their contradictory and metaphorical meaning content. The main FL expression forms are sarcasm, irony and metaphor. In the present paper we employ advanced Deep Learning (DL) methodologies to tackle the problem of identifying the aforementioned FL forms. Significantly extending our previous work [71], we propose a neural network methodology that builds on a recently proposed pre-trained transformer-based network architecture which is further enhanced with a purposely devised recurrent convolutional neural network (RCNN). With this set-up, data preprocessing is kept to a minimum. The performance of the devised hybrid neural architecture is tested on four benchmark datasets, and contrasted with other relevant state-of-the-art methodologies and systems. Results demonstrate that the proposed methodology achieves state-of-the-art performance on all benchmark datasets, outperforming, even by a large margin, all other methodologies and published studies. | {
"paragraphs": [
[
"In the networked-world era the production of (structured or unstructured) data is increasing with most of our knowledge being created and communicated via web-based social channels BIBREF1. Such data explosion raises the need for efficient and reliable solutions for the management, analysis and interpretation of huge data sizes. Analyzing and extracting knowledge from massive data collections is not only a big issue per-se, but also challenges the data analytics state-of-the-art BIBREF2, with statistical and machine learning methodologies paving the way, and deep learning (DL) taking over and presenting highly accurate solutions BIBREF3. Relevant applications in the field of social media cover a wide spectrum, from the categorization of major disasters BIBREF4 and the identification of suggestions BIBREF5 to inducing users’ appeal to political parties BIBREF6.",
"The raising of computational social science BIBREF7, and mainly its social media dimension BIBREF8, challenge contemporary computational linguistics and text-analytics endeavors. The challenge concerns the advancement of text analytics methodologies towards the transformation of unstructured excerpts into some kind of structured data via the identification of special passage characteristics, such as its emotional content (e.g., anger, joy, sadness) BIBREF9. In this context, Sentiment Analysis (SA) comes into play, targeting the devise and development of efficient algorithmic processes for the automatic extraction of a writer’s sentiment or emotion as conveyed in text excerpts. Relevant efforts focus on tracking the sentiment polarity of single utterances, which in most cases is loaded with a lot of subjectivity and a degree of vagueness BIBREF10. Contemporary research in the field utilizes data from social media resources (e.g., Facebook, Twitter) as well as other short text references in blogs, forums etc BIBREF11. However, users of social media tend to violate common grammar and vocabulary rules and even use various figurative language forms to communicate their message. In such situations, the sentiment inclination underlying the literal content of the conveyed concept may significantly differ from its figurative context, making SA tasks even more puzzling. Evidently, single turn text lack in detecting sentiment polarity on sarcastic and ironic expressions, as already signified in the relevant “SemEval-2014 Sentiment Analysis task 9” BIBREF12. Moreover, lacking of facial expressions and voice tone require context aware approaches to tackle such a challenging task and overcome its ambiguities BIBREF13. As sentiment is the emotion behind customer engagement, SA finds its realization in automated customer aware services, elaborating over user’s emotional intensities BIBREF14. Most of the related studies utilize single turn texts from topic specific sources, such as Twitter, Amazon, IMDB etc. Hand crafted and sentiment-oriented features, indicative of emotion polarity, are utilized to represent respective excerpt cases. The formed data are then fed traditional machine learning classifiers (e.g. SVM, Random Forest, multilayer perceptrons) or DL techniques and respective complex neural architectures, in order to induce analytical models that are able to capture the underlying sentiment content and polarity of passages BIBREF15, BIBREF16, BIBREF17.",
"The linguistic phenomenon of figurative language (FL) refers to the contradiction between the literal and the non-literal meaning of an utterance BIBREF18. Literal written language assigns ‘exact’ (or ‘real’) meaning to the used words (or phrases) without any reference to putative speech figures. In contrast, FL schemas exploit non-literal mentions that deviate from the exact concept presented by the used words and phrases. FL is rich of various linguistic phenomena like ‘metonymy’ reference to an entity stands for another of the same domain, a more general case of ‘synonymy’; and ‘metaphors’ systematic interchange between entities from different abstract domains BIBREF19. Besides the philosophical considerations, theories and debates about the exact nature of FL, findings from the neuroscience research domain present clear evidence on the presence of differentiating FL processing patterns in the human brain BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF14, even for woman-man attraction situations! BIBREF24. A fact that makes FL processing even more challenging and difficult to tackle. Indeed, this is the case of pragmatic FL phenomena like irony and sarcasm that main intention of in most of the cases, are characterized by an oppositeness to the literal language context. It is crucial to distinguish between the literal meaning of an expression considered as a whole from its constituents’ words and phrases. As literal meaning is assumed to be invariant in all context at least in its classical conceptualization BIBREF25, it is exactly this separation of an expression from its context that permits and opens the road to computational approaches in detecting and characterizing FL utterance.",
"We may identify three common FL expression forms namely, irony, sarcasm and metaphor. In this paper, figurative expressions, and especially ironic or sarcastic ones, are considered as a way of indirect denial. From this point of view, the interpretation and ultimately identification of the indirect meaning involved in a passage does not entail the cancellation of the indirectly rejected message and its replacement with the intentionally implied message (as advocated in BIBREF26, BIBREF27). On the contrary ironic/sarcastic expressions presupposes the processing of both the indirectly rejected and the implied message so that the difference between them can be identified. This view differs from the assumption that irony and sarcasm involve only one interpretation BIBREF28, BIBREF29. Holding that irony activates both grammatical / explicit as well as ironic / involved notions provides that irony will be more difficult to grasp than a non-ironic use of the same expression.",
"Despite that all forms of FL are well studied linguistic phenomena BIBREF28, computational approaches fail to identify the polarity of them within a text. The influence of FL in sentiment classification emerged both on SemEval-2014 Sentiment Analysis task BIBREF12 and BIBREF19. Results show that Natural Language Processing (NLP) systems effective in most other tasks see their performance drop when dealing with figurative forms of language. Thus, methods capable of detecting, separating and classifying forms of FL would be valuable building blocks for a system that could ultimately provide a full-spectrum sentiment analysis of natural language.",
"In literature we encounter some major drawbacks of previous studies and we aim to resolve with our proposed method:",
"Many studies tackle figurative language by utilizing a wide range of engineered features (e.g. lexical and sentiment based features) BIBREF30, BIBREF31, BIBREF0, BIBREF32, BIBREF33, BIBREF34 making classification frameworks not feasible.",
"Several approaches search words on large dictionaries which demand large computational times and can be considered as impractical BIBREF0, BIBREF34",
"Many studies exhausting preprocess the input texts, including stemming, tagging, emoji processing etc. that tend to be time consuming especially in large datasets BIBREF35, BIBREF36.",
"Many approaches attempt to create datasets using social media API’s to automatically collect data rather than exploiting their system on benchmark datasets, with proven quality. To this end, it is impossible to be compared and evaluated BIBREF35, BIBREF37, BIBREF36.",
"To tackle the aforementioned problems, we propose an end-to-end methodology containing none hand crafted engineered features or lexicon dictionaries, a preprocessing step that includes only de-capitalization and we evaluate our system on several benchmark dataset. To the best of our knowledge, this is the first time that an unsupervised pre-trained Transformer method is used to capture figurative language in many of its forms.",
"The rest of the paper is structured as follows, in Section SECREF2 we present the related work on the field of FL detection, in Section SECREF3 we present our proposed method along with several state-of-the-art models that achieve high performance in a wide range of NLP tasks which will be used to compare performance, the results of our experiments are presented in Section SECREF4, and finally our conclusion is in Section SECREF5."
],
[
"Although the NLP community have researched all aspects of FL independently, none of the proposed systems were evaluated on more than one type. Related work on FL detection and classification tasks could be categorized into two main categories, according to the studied task: (a) irony and sarcasm detection, and (b) sentiment analysis of FL excerpts. Even if sarcasm and irony are not identical phenomenons, we will present those types together, as they appear in the literature."
],
[
"Recently, the detection of ironic and sarcastic meanings from respective literal ones have raised scientific interest due to the intrinsic difficulties to differentiate between them. Apart from English language, irony and sarcasm detection have been widely explored on other languages as well, such as Italian BIBREF38, Japanese BIBREF39, Spanish BIBREF40, Greek BIBREF41 etc. In the review analysis that follows we group related approaches according to the their adopted key concepts to handle FL.",
"Approaches based on unexpectedness and contradictory factors. Reyes et al. BIBREF42, BIBREF43 were the first that attempted to capture irony and sarcasm in social media. They introduced the concepts of unexpectedness and contradiction that seems to be frequent in FL expressions. The unexpectedness factor was also adopted as a key concept in other studies as well. In particular, Barbieri et al. BIBREF44 compared tweets with sarcastic content with other topics such as, #politics, #education, #humor. The measure of unexpectedness was calculated using the American National Corpus Frequency Data source as well as the morphology of tweets, using Random Forests (RF) and Decision Trees (DT) classifiers. In the same direction, Buschmeir et al. BIBREF45 considered unexpectedness as an emotional imbalance between words in the text. Ghosh et al. BIBREF46 identified sarcasm using Support Vector Machines (SVM) using as features the identified contradictions within each tweet.",
"Content and context-based approaches. Inspired by the contradictory and unexpectedness concepts, follow-up approaches utilized features that expose information about the content of each passage including: N-gram patterns, acronyms and adverbs BIBREF47; semi-supervised attributes like word frequencies BIBREF48; statistical and semantic features BIBREF33; and Linguistic Inquiry and Word Count (LIWC) dictionary along with syntactic and psycho-linguistic features BIBREF49. LIWC corpus BIBREF50 was also utilized in BIBREF31, comparing sarcastic tweets with positive and negative ones using an SVM classifier. Similarly, using several lexical resources BIBREF34, and syntactic and sentiment related features BIBREF37, the respective researchers explored differences between sarcastic and ironic expressions. Affective and structural features are also employed to predict irony with conventional machine learning classifiers (DT, SVM, Naïve Bayes/NB) in BIBREF51. In a follow-up study BIBREF30, a knowledge-based k-NN classifier was fed with a feature set that captures a wide range of linguistic phenomena (e.g., structural, emotional). Significant results were achieved in BIBREF36, were a combination of lexical, semantic and syntactic features passed through an SVM classifier that outperformed LSTM deep neural network approaches. Apart from local content, several approaches claimed that global context may be essential to capture FL phenomena. In particular, in BIBREF52 it is claimed that capturing previous and following comments on Reddit increases classification performance. Users’ behavioral information seems to be also beneficial as it captures useful contextual information in Twitter post BIBREF32. A novel unsupervised probabilistic modeling approach to detect irony was also introduced in BIBREF53.",
"Deep Learning approaches. Although several DL methodologies, such as recurrent neural networks (RNNs), are able to capture hidden dependencies between terms within text passages and can be considered as content-based, we grouped all DL studies for readability purposes. Word Embeddings, i.e., learned mappings of words to real valued vectors BIBREF54, play a key role in the success of RNNs and other DL neural architectures that utilize pre-trained word embeddings to tackle FL. In fact, the combination of word embeddings with Convolutional Neural Networks (CNN), so called CNN-LSTM units, was introduced by Kumar BIBREF55 and Ghosh & Veale BIBREF56 achieving state-of-the-art performance. Attentive RNNs exhibit also good performance when matched with pre-trained Word2Vec embeddings BIBREF57, and contextual information BIBREF58. Following the same approach an LSTM based intra-attention was introduced in BIBREF59 that achieved increased performance. A different approach, founded on the claim that number present significant indicators, was introduced by Dubey et al. BIBREF60. Using an attentive CNN on a dataset with sarcastic tweets that contain numbers, showed notable results. An ensemble of a shallow classifier with lexical, pragmatic and semantic features, utilizing a Bidirectional LSTM model is presented in BIBREF61. In a subsequent study BIBREF35, the researchers engineered a soft attention LSTM model coupled with a CNN. Contextual DL approaches are also employed, utilizing pre-trained along with user embeddings structured from previous posts BIBREF62 or, personality embeddings passed through CNNs BIBREF63. ELMo embeddings BIBREF64 are utilized in BIBREF65. In our previous approach we implemented an ensemble deep learning classifier (DESC) BIBREF0, capturing content and semantic information. In particular, we employed an extensive feature set of a total 44 features leveraging syntactic, demonstrative, sentiment and readability information from each text along with Tf-idf features. In addition, an attentive bidirectional LSTM model trained with GloVe pre-trained word embeddings was utilized to structure an ensemble classifier processing different text representations. DESC model performed state-of-the-art results on several FL tasks."
],
[
"The Semantic Evaluation Workshop-2015 BIBREF66 proposed a joint task to evaluate the impact of FL in sentiment analysis on ironic, sarcastic and metaphorical tweets, with a number of submissions achieving highly performance results. The ClaC team BIBREF67 exploited four lexicons to extract attributes as well as syntactic features to identify sentiment polarity. The UPF team BIBREF68 introduced a regression classification methodology on tweet features extracted with the use of the widely utilized SentiWordNet and DepecheMood lexicons. The LLT-PolyU team BIBREF69 used semi-supervised regression and decision trees on extracted uni-gram and bi-gram features, coupled with features that capture potential contradictions at short distances. An SVM-based classifier on extracted n-gram and Tf-idf features was used by the Elirf team BIBREF70 coupled with specific lexicons such as Affin, Patter and Jeffrey 10. Finally, the LT3 team BIBREF71 used an ensemble Regression and SVM semi-supervised classifier with lexical features extracted with the use of WordNet and DBpedia11."
],
[
"Due to the limitations of annotated datasets and the high cost of data collection, unsupervised learning approaches tend to be an easier way towards training networks. Recently, transfer learning approaches, i.e., the transfer of already acquired knowledge to new conditions, are gaining attention in several domain adaptation problems BIBREF72. In fact, pre-trained embeddings representations, such as GloVe, ElMo and USE, coupled with transfer learning architectures were introduced and managed to achieve state-of-the-art results on various NLP tasks BIBREF73. In this chapter we review on these methodologies in order to introduce our approach. In this chapter we will summarize those methods and introduce our proposed transfer learning system. Model specifications used for the state-of-the-art models compared can be found in Appendix SECREF6."
],
[
"Pre-trained word embeddings proved to increase classification performances in many NLP tasks. In particular, Global Vectors (GloVe) BIBREF74 and Word2Vec BIBREF75 became popular in various tasks due to their ability to capture representative semantic representations of words, trained on large amount of data. However, in various studies (e.g., BIBREF76, BIBREF64, BIBREF77) it is argued that the actual meaning of words along with their semantics representations varies according to their context. Following this assumption, researchers in BIBREF64 present an approach that is based on the creation of pre-trained word embeddings through building a bidirectional Language model, i.e. predicting next word within a sequence. The ELMo model was exhaustingly trained on 30 million sentences corpus BIBREF78, with a two layered bidirectional LSTM architecture, aiming to predict both next and previous words, introducing the concept of contextual embeddings. The final embeddings vector is produced by a task specific weighted sum of the two directional hidden layers of LSTM models. Another contextual approach for creating embedding vector representations is proposed in BIBREF79 where, complete sentences, instead of words, are mapped to a latent vector space. The approach provides two variations of Universal Sentence Encoder (USE) with some trade-offs in computation and accuracy. The first approach consists of a computationally intensive transformer that resembles a transformer network BIBREF80, proved to achieve higher performance figures. In contrast, the second approach provides a light-weight model that averages input embedding weights for words and bi-grams by utilizing of a Deep Average Network (DAN) BIBREF81. The output of the DAN is passed through a feedforward neural network in order to produce the sentence embeddings. Both approaches take as input lowercased PTB tokenized strings, and output a 512-dimensional sentence embedding vectors."
],
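To make the USE/DAN variant described above concrete, the following is an illustrative (not Google's) Deep Averaging Network sketch: token embeddings are averaged and passed through feed-forward layers to yield a 512-dimensional sentence vector. The vocabulary size and layer widths are assumptions chosen only for the example.

```python
# Illustrative Deep Averaging Network (DAN) sentence encoder sketch.
import torch
import torch.nn as nn

class TinyDAN(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, out_dim=512):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.ff = nn.Sequential(
            nn.Linear(emb_dim, out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, token_ids, mask):
        # mask: 1 for real tokens, 0 for padding
        emb = self.emb(token_ids)                          # (B, T, E)
        summed = (emb * mask.unsqueeze(-1)).sum(dim=1)     # (B, E)
        avg = summed / mask.sum(dim=1, keepdim=True).clamp(min=1)
        return self.ff(avg)                                # (B, 512)

# Example: a batch of two padded "sentences" of token ids.
ids = torch.tensor([[5, 9, 2, 0], [7, 3, 0, 0]])
mask = (ids != 0).float()
print(TinyDAN(vocab_size=100)(ids, mask).shape)  # torch.Size([2, 512])
```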
[
"Sequence-to-sequence (seq2seq) methods using encoder-decoder schemes are a popular choice for several tasks such as Machine Translation, Text Summarization, Question Answering etc. BIBREF82. However, encoder’s contextual representations are uncertain when dealing with long-range dependencies. To address these drawbacks, Vaswani et al. in BIBREF80 introduced a novel network architecture, called Transformer, relying entirely on self-attention units to map input sequences to output sequences without the use of RNNs. The Transformer’s decoder unit architecture contains a masked multi-head attention layer followed by a multi-head attention unit and a feed forward network whereas the decoder unit is almost identical without the masked attention unit. Multi-head self-attention layers are calculated in parallel facing the computational costs of regular attention layers used by previous seq2seq network architectures. In BIBREF18 the authors presented a model that is founded on findings from various previous studies (e.g., BIBREF83, BIBREF84, BIBREF64, BIBREF49, BIBREF80), which achieved state-of-the-art results on eleven NLP tasks, called BERT - Bidirectional Encoder Representations from Transformers. The BERT training process is split in two phases, the unsupervised pre-training phase and the fine-tuning phase using labelled data for down-streaming tasks. In contrast with previous proposed models (e.g., BIBREF64, BIBREF49), BERT uses masked language models (MLMs) to enable pre-trained deep bidirectional representations. In the pre-training phase the model is trained with a large amount of unlabeled data from Wikipedia, BookCorpus BIBREF85 and WordPiece BIBREF86 embeddings. In this training part, the model was tested on two tasks; on the first task, the model randomly masks 15% of the input tokens aiming to capture conceptual representations of word sequences by predicting masked words inside the corpus, whereas in the second task the model is given two sentences and tries to predict whether the second sentence is the next sentence of the first. In the second phase, BERT is extended with a task-related classifier model that is trained on a supervised manner. During this supervised phase, the pre-trained BERT model receives minimal changes, with the classifier’s parameters trained in order to minimize the loss function. Two models presented in BIBREF18, a “Base Bert” model with 12 encoder layers (i.e. transformer blocks), feed-forward networks with 768 hidden units and 12 attention heads, and a “Large Bert” model with 24 encoder layers 1024 feed-the pre-trained Bert model, an architecture almost identical with the aforementioned Transformer network. A [CLS] token is supplied in the input as the first token, the final hidden state of which is aggregated for classification tasks. Despite the achieved breakthroughs, the BERT model suffers from several drawbacks. Firstly, BERT, as all language models using Transformers, assumes (and pre-supposes) independence between the masked words from the input sequence, and neglects all the positional and dependency information between words. In other words, for the prediction of a masked token both word and position embeddings are masked out, even if positional information is a key-aspect of NLP BIBREF87. In addition, the [MASK] token which, is substituted with masked words, is mostly absent in fine-tuning phase for down-streaming tasks, leading to a pre-training fine-turning discrepancy. 
To address the cons of BERT, a permutation language model was introduced, so-called XLnet, trained to predict masked tokens in a non-sequential random order, factorizing likelihood in an autoregressive manner without the independence assumption and without relying on any input corruption BIBREF88. In particular, a query stream is used that extends embedding representations to incorporate positional information about the masked words. The original representation set (content stream), including both token and positional embeddings, is then used as input to the query stream following a scheme called “Two-Stream SelfAttention”. To overcome the problem of slow convergence the authors propose the prediction of the last token in the permutation phase, instead of predicting the entire sequence. Finally, XLnet uses also a special token for the classification and separation of the input sequence, [CLS] and [SEP] respectively, however it also learns an embedding that denotes whether the two words are from the same segment. This is similar to relative positional encodings introduced in TrasformerXL BIBREF87, and extents the ability of XLnet to cope with tasks that encompass arbitrary input segments. Recently, a replication study, BIBREF18, suggested several modifications in the training procedure of BERT which, outperforms the original XLNet architecture on several NLP tasks. The optimized model, called Robustly Optimized BERT Approach (RoBERTa), used 10 times more data (160GB compared with the 16GB originally exploited), and is trained with far more epochs than the BERT model (500K vs 100K), using also 8-times larger batch sizes, and a byte-level BPE vocabulary instead of the character-level vocabulary that was previously utilized. Another significant modification, was the dynamic masking technique instead of the single static mask used in BERT. In addition, RoBERTa model removes the next sentence prediction objective used in BERT, following advises by several other studies that question the NSP loss term BIBREF89, BIBREF90, BIBREF91."
],
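The masked-language-model objective described above can be illustrated with a short sketch of the input-corruption step: roughly 15% of non-special token positions are replaced with a [MASK] id and become prediction targets. The specific token ids and the simplified masking policy (no random-keep/random-replace branches) are assumptions for illustration only.

```python
# Hedged sketch of BERT-style masked-LM input corruption.
import random

MASK_ID = 103             # assumed [MASK] id (matches BERT's WordPiece vocab)
SPECIAL_IDS = {101, 102}  # assumed [CLS]/[SEP] ids, never masked

def mask_tokens(token_ids, mask_prob=0.15, seed=None):
    rng = random.Random(seed)
    corrupted = list(token_ids)
    labels = [-100] * len(token_ids)      # -100: position ignored by the loss
    for i, tok in enumerate(token_ids):
        if tok in SPECIAL_IDS:
            continue
        if rng.random() < mask_prob:
            labels[i] = tok               # original token becomes the target
            corrupted[i] = MASK_ID        # (BERT also sometimes keeps/replaces randomly)
    return corrupted, labels

print(mask_tokens([101, 7592, 2088, 2003, 2307, 102], seed=0))
```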
[
"The intuition behind our proposed RCNN-RoBERTa approach is founded on the following observation: as pre-trained networks are beneficial for several down-streaming tasks, their outputs could be further enhanced if processed properly by other networks. Towards this end, we devised an end-to-end model with minimum training time that utilizes pre-trained RoBERTa weights combined with a RCNN in order to capture contextual information. Actually, the proposed leaning model is based on a hybrid DL neural architecture that utilizes pre-trained transformer models and feed the hidden representations of the transformer into a Recurrent Convolutional Neural Network (RCNN), similar to BIBREF92. In particular, we employed the RoBERTa base model with 12 hidden states and 12 attention heads, and used its output hidden states as an embedding layer to a RCNN. As already stated, contradictions and long-time dependencies within a sentence may be used as strong identifiers of FL expressions. RNNs are often used to capture time relationships between words, however they are strongly biased, i.e. later words are tending to be more dominant that previous ones BIBREF92. This problem can be alleviated with CNNs, which, as unbiased models, can determine semantic relationships between words with max-pooling. Nevertheless, contextual information in CNNs is depended totally on kernel sizes. Thus, we appropriately modified the RCNN model presented in BIBREF92 in order to capture unbiased recurrent informative relationships within text, and we implemented a Bidirectional LSTM (BiLSTM) layer, which is fed with RoBERTa’s final hidden layer weights. The output of LSTM is concatenated with the embedded weights, and passed through a feedforward network and a max-pooling layer. Finally, softmax function is used for the output layer. Table TABREF12 shows the parameters used in training and Figure FIGREF13 demonstrates our method."
],
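A minimal sketch of how the RCNN-RoBERTa description above could be realized with the HuggingFace transformers library is shown below. The layer widths, the tanh projection and the exact concatenation/pooling order are illustrative assumptions based on the text and on BIBREF92, not the authors' released code.

```python
# Hedged sketch of the described RCNN-RoBERTa architecture (transformers >= 4.x).
import torch
import torch.nn as nn
from transformers import RobertaModel

class RCNNRoberta(nn.Module):
    def __init__(self, num_classes=2, lstm_units=256, hidden=64):
        super().__init__()
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        d = self.roberta.config.hidden_size                      # 768 for roberta-base
        self.bilstm = nn.LSTM(d, lstm_units, batch_first=True,
                              bidirectional=True)
        self.proj = nn.Linear(d + 2 * lstm_units, hidden)
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # RoBERTa hidden states act as contextual embeddings for the RCNN.
        emb = self.roberta(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        rnn_out, _ = self.bilstm(emb)
        # Concatenate BiLSTM output with the embeddings, project, then max-pool.
        feats = torch.tanh(self.proj(torch.cat([emb, rnn_out], dim=-1)))
        pooled, _ = feats.max(dim=1)         # max-pooling over the sequence
        return self.out(pooled)              # softmax is applied inside the loss
```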
[
"To assess the performance of the proposed method we performed an exhaustive comparison with several advanced state-of-the-art methodologies along with published results. The used methodologies were appropriately implemented using the available codes and guidelines, and include: ELMo BIBREF64, USE BIBREF79, NBSVM BIBREF93, FastText BIBREF94, XLnet base cased model (XLnet) BIBREF88, BERT BIBREF18 in two setups: BERT base cased (BERT-Cased) and BERT base uncased (BERT-Uncased) models, and RoBERTa base model. The published results were acquired from the respective original publication (the reference publication is indicated in the respective tables). For the comparison we utilized benchmark datasets that include ironic, sarcastic and metaphoric expressions. Namely, we used the dataset provided in “Semantic Evaluation Workshop Task 3” (SemEval-2018) that contains ironic tweets BIBREF95; Riloff’s high quality sarcastic unbalanced dataset BIBREF96; a large dataset containing political comments from Reddit BIBREF97; and a SA dataset that contains tweets with various FL forms from “SemEval-2015 Task 11” BIBREF66. All datasets are used in a binary classification manner (i.e., irony/sarcasm vs. literal), except from the “SemEval-2015 Task 11” dataset where the task is to predict a sentiment integer score (from -5 to 5) for each tweet (refer to BIBREF0 for more details). The evaluation was made across standard five metrics namely, Accuracy (Acc), Precision (Pre), Recall (Rec), F1-score (F1), and Area Under the Receiver Operating Characteristics Curve (AUC). For the SA task the cosine similarity metric (Cos) and mean squared error (MSE) metrics are used, as proposed in the original study BIBREF66.",
"The results are summarized in the tables TABREF14-TABREF17; each table refers to the respective comparison study. All tables present the performance results of our proposed method (“Proposed”) and contrast them to eight state-of-the-art baseline methodologies along with published results using the same dataset. Specifically, Table TABREF14 presents the results obtained using the ironic dataset used in SemEval-2018 Task 3.A, compared with recently published studies and two high performing teams from the respective SemEval shared task BIBREF98, BIBREF99. Tables TABREF15,TABREF16 summarize results obtained using Sarcastic datasets (Reddit SARC politics BIBREF97 and Riloff Twitter BIBREF96). Finally, Table TABREF17 compares the results from baseline models, from top two ranked task participants BIBREF68, BIBREF67, from our previous study with the DESC methodology BIBREF0 with the proposed RCNN-RoBERTa framework on a Sentiment Analysis task with figurative language, using the SemEval 2015 Task 11 dataset.",
"As it can be easily observed, the proposed RCNN-RoBERTa approach outperforms all approaches as well as all methods with published results, for the respective binary classification tasks (Tables TABREF14, TABREF15, and TABREF16). Our previous approach, DESC (introduced in BIBREF0), performs slightly better in terms of cosine similarity for the sentiment scoring task (Table TABREF17, 0,820 vs. 0,810), with the RCNN-RoBERTa approach to perform better and managing to significantly improve the MSE measure by almost 33.5% (2,480 vs. 1,450)."
],
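For reference, the reported metrics can be computed as in the hedged sketch below (scikit-learn for Acc/P/R/F1/AUC and MSE, plain numpy for cosine similarity); the values are toy placeholders, not the paper's predictions.

```python
# Sketch of the evaluation metrics used in the comparison tables.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, mean_squared_error)

# Binary irony/sarcasm classification (toy values).
y_true, y_pred = [1, 0, 1, 1, 0], [1, 0, 0, 1, 0]
y_score = [0.9, 0.2, 0.4, 0.8, 0.3]          # probability of the positive class
print(accuracy_score(y_true, y_pred), precision_score(y_true, y_pred),
      recall_score(y_true, y_pred), f1_score(y_true, y_pred),
      roc_auc_score(y_true, y_score))

# SemEval-2015 Task 11 style scoring: cosine similarity and MSE between
# gold and predicted sentiment scores in [-5, 5].
gold, pred = np.array([-3.0, 2.0, -1.0]), np.array([-2.0, 1.0, -1.0])
cos = gold.dot(pred) / (np.linalg.norm(gold) * np.linalg.norm(pred))
print(cos, mean_squared_error(gold, pred))
```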
[
"In this study, we propose the first transformer based methodology, leveraging the pre-trained RoBERTa model combined with a recurrent convolutional neural network, to tackle figurative language in social media. Our network is compared with all, to the best of our knowledge, published approaches under four different benchmark dataset. In addition, we aim to minimize preprocessing and engineered feature extraction steps which are, as we claim, unnecessary when using overly trained deep learning methods such as transformers. In fact, hand crafted features along with preprocessing techniques such as stemming and tagging on huge datasets containing thousands of samples are almost prohibited in terms of their computation cost. Our proposed model, RCNN-RoBERTa, achieve state-of-the-art performance under six metrics over four benchmark dataset, denoting that transfer learning non-literal forms of language. Moreover, RCNN-RoBERTa model outperforms all other state-of-the-art approaches tested including BERT, XLnet, ELMo, and USE under all metric, some by a large factor."
],
[
"In our experiments we compared our model with several seven different classifiers under different settings. For the ELMo system we used the mean-pooling of all contextualized word representations, i.e. character-based embedding representations and the output of the two layer LSTM resulting with a 1024 dimensional vector, and passed it through two deep dense ReLu activated layers with 256 and 64 units. Similarly, USE embeddings are trained with a Transformer encoder and output 512 dimensional vector for each sample, which is also passed through through two deep dense ReLu activated layers with 256 and 64 units. Both ELMo and USE embeddings retrieved from TensorFlow Hub. NBSVM system was modified according to BIBREF93 and trained with a ${10^{-3}}$ leaning rate for 5 epochs with Adam optimizer BIBREF100. FastText system was implemented by utilizing pre-trained embeddings BIBREF94 passed through a global max-pooling and a 64 unit fully connected layer. System was trained with Adam optimizer with learning rate ${0.1}$ for 3 epochs. XLnet model implemented using the base-cased model with 12 layers, 768 hidden units and 12 attention heads. Model trained with learning rate ${4 \\times 10^{-5}}$ using ${10^{-5}}$ weight decay for 3 epochs. We exploited both cased and uncased BERT-base models containing 12 layers, 768 hidden units and 12 attention heads. We trained models for 3 epochs with learning rate ${2 \\times 10^{-5}}$ using ${10^{-5}}$ weight decay. We trained RoBERTa model following the setting of BERT model. RoBERTa, XLnet and BERT models implemented using pytorch-transformers library ."
]
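The appendix settings quoted above for the transformer baselines (3 epochs, learning rate 2e-5 and weight decay 1e-5 for BERT/RoBERTa, 4e-5 for XLNet) map onto a standard fine-tuning loop such as the sketch below; the choice of AdamW and the loop structure are assumptions, since the exact optimizer is not stated for every transformer model.

```python
# Hedged sketch of a fine-tuning loop matching the quoted hyperparameters.
import torch

def fine_tune(model, train_loader, epochs=3, lr=2e-5, weight_decay=1e-5):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr,
                                  weight_decay=weight_decay)  # optimizer choice assumed
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, labels in train_loader:
            optimizer.zero_grad()
            logits = model(input_ids, attention_mask)
            loss = loss_fn(logits, labels)
            loss.backward()
            optimizer.step()
    return model
```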
],
"section_name": [
"Introduction",
"Literature Review",
"Literature Review ::: Irony and Sarcasm Detection",
"Literature Review ::: Sentiment Analysis on Figurative Language",
"Methodology: A hybrid Recurrent Convolution Transformer Approach ::: The background: Transfer Learning",
"Methodology: A hybrid Recurrent Convolution Transformer Approach ::: The background: Transfer Learning ::: Contextual Embeddings",
"Methodology: A hybrid Recurrent Convolution Transformer Approach ::: The background: Transfer Learning ::: Transformer Methods",
"Methodology: A hybrid Recurrent Convolution Transformer Approach ::: Proposed Method - Recurrent CNN RoBERTA (RCNN-RoBERTa)",
"Experimental Results",
"Conclusion",
"Appendix"
]
} | {
"answers": [
{
"annotation_id": [
"073ad6175cd325bfc075334a34ed5223b71bf6ae",
"adac12da2f5ab6283ef1cd2d3cc5aa1c7601ddb7"
],
"answer": [
{
"evidence": [
"To assess the performance of the proposed method we performed an exhaustive comparison with several advanced state-of-the-art methodologies along with published results. The used methodologies were appropriately implemented using the available codes and guidelines, and include: ELMo BIBREF64, USE BIBREF79, NBSVM BIBREF93, FastText BIBREF94, XLnet base cased model (XLnet) BIBREF88, BERT BIBREF18 in two setups: BERT base cased (BERT-Cased) and BERT base uncased (BERT-Uncased) models, and RoBERTa base model. The published results were acquired from the respective original publication (the reference publication is indicated in the respective tables). For the comparison we utilized benchmark datasets that include ironic, sarcastic and metaphoric expressions. Namely, we used the dataset provided in “Semantic Evaluation Workshop Task 3” (SemEval-2018) that contains ironic tweets BIBREF95; Riloff’s high quality sarcastic unbalanced dataset BIBREF96; a large dataset containing political comments from Reddit BIBREF97; and a SA dataset that contains tweets with various FL forms from “SemEval-2015 Task 11” BIBREF66. All datasets are used in a binary classification manner (i.e., irony/sarcasm vs. literal), except from the “SemEval-2015 Task 11” dataset where the task is to predict a sentiment integer score (from -5 to 5) for each tweet (refer to BIBREF0 for more details). The evaluation was made across standard five metrics namely, Accuracy (Acc), Precision (Pre), Recall (Rec), F1-score (F1), and Area Under the Receiver Operating Characteristics Curve (AUC). For the SA task the cosine similarity metric (Cos) and mean squared error (MSE) metrics are used, as proposed in the original study BIBREF66."
],
"extractive_spans": [
"ELMo",
"USE",
"NBSVM",
"FastText",
"XLnet base cased model (XLnet)",
"BERT base cased (BERT-Cased)",
"BERT base uncased (BERT-Uncased)",
"RoBERTa base model"
],
"free_form_answer": "",
"highlighted_evidence": [
"The used methodologies were appropriately implemented using the available codes and guidelines, and include: ELMo BIBREF64, USE BIBREF79, NBSVM BIBREF93, FastText BIBREF94, XLnet base cased model (XLnet) BIBREF88, BERT BIBREF18 in two setups: BERT base cased (BERT-Cased) and BERT base uncased (BERT-Uncased) models, and RoBERTa base model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To assess the performance of the proposed method we performed an exhaustive comparison with several advanced state-of-the-art methodologies along with published results. The used methodologies were appropriately implemented using the available codes and guidelines, and include: ELMo BIBREF64, USE BIBREF79, NBSVM BIBREF93, FastText BIBREF94, XLnet base cased model (XLnet) BIBREF88, BERT BIBREF18 in two setups: BERT base cased (BERT-Cased) and BERT base uncased (BERT-Uncased) models, and RoBERTa base model. The published results were acquired from the respective original publication (the reference publication is indicated in the respective tables). For the comparison we utilized benchmark datasets that include ironic, sarcastic and metaphoric expressions. Namely, we used the dataset provided in “Semantic Evaluation Workshop Task 3” (SemEval-2018) that contains ironic tweets BIBREF95; Riloff’s high quality sarcastic unbalanced dataset BIBREF96; a large dataset containing political comments from Reddit BIBREF97; and a SA dataset that contains tweets with various FL forms from “SemEval-2015 Task 11” BIBREF66. All datasets are used in a binary classification manner (i.e., irony/sarcasm vs. literal), except from the “SemEval-2015 Task 11” dataset where the task is to predict a sentiment integer score (from -5 to 5) for each tweet (refer to BIBREF0 for more details). The evaluation was made across standard five metrics namely, Accuracy (Acc), Precision (Pre), Recall (Rec), F1-score (F1), and Area Under the Receiver Operating Characteristics Curve (AUC). For the SA task the cosine similarity metric (Cos) and mean squared error (MSE) metrics are used, as proposed in the original study BIBREF66."
],
"extractive_spans": [
"ELMo",
"USE ",
"NBSVM ",
"FastText ",
"XLnet base cased model (XLnet",
"BERT base cased (BERT-Cased) ",
"BERT base uncased (BERT-Uncased)",
"RoBERTa "
],
"free_form_answer": "",
"highlighted_evidence": [
"To assess the performance of the proposed method we performed an exhaustive comparison with several advanced state-of-the-art methodologies along with published results. The used methodologies were appropriately implemented using the available codes and guidelines, and include: ELMo BIBREF64, USE BIBREF79, NBSVM BIBREF93, FastText BIBREF94, XLnet base cased model (XLnet) BIBREF88, BERT BIBREF18 in two setups: BERT base cased (BERT-Cased) and BERT base uncased (BERT-Uncased) models, and RoBERTa base model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"1bcfbbdd061fa386c7604d2c7c7b94e23d63eb0a",
"c8908ad997aa5208e8f8ece8717042fac3fb0fec"
],
"answer": [
{
"evidence": [
"We may identify three common FL expression forms namely, irony, sarcasm and metaphor. In this paper, figurative expressions, and especially ironic or sarcastic ones, are considered as a way of indirect denial. From this point of view, the interpretation and ultimately identification of the indirect meaning involved in a passage does not entail the cancellation of the indirectly rejected message and its replacement with the intentionally implied message (as advocated in BIBREF26, BIBREF27). On the contrary ironic/sarcastic expressions presupposes the processing of both the indirectly rejected and the implied message so that the difference between them can be identified. This view differs from the assumption that irony and sarcasm involve only one interpretation BIBREF28, BIBREF29. Holding that irony activates both grammatical / explicit as well as ironic / involved notions provides that irony will be more difficult to grasp than a non-ironic use of the same expression."
],
"extractive_spans": [],
"free_form_answer": "Irony, sarcasm and metaphor are figurative language form. Irony and sarcasm are considered as a way of indirect denial.",
"highlighted_evidence": [
"We may identify three common FL expression forms namely, irony, sarcasm and metaphor. In this paper, figurative expressions, and especially ironic or sarcastic ones, are considered as a way of indirect denial. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We may identify three common FL expression forms namely, irony, sarcasm and metaphor. In this paper, figurative expressions, and especially ironic or sarcastic ones, are considered as a way of indirect denial. From this point of view, the interpretation and ultimately identification of the indirect meaning involved in a passage does not entail the cancellation of the indirectly rejected message and its replacement with the intentionally implied message (as advocated in BIBREF26, BIBREF27). On the contrary ironic/sarcastic expressions presupposes the processing of both the indirectly rejected and the implied message so that the difference between them can be identified. This view differs from the assumption that irony and sarcasm involve only one interpretation BIBREF28, BIBREF29. Holding that irony activates both grammatical / explicit as well as ironic / involved notions provides that irony will be more difficult to grasp than a non-ironic use of the same expression."
],
"extractive_spans": [
"We may identify three common FL expression forms namely, irony, sarcasm and metaphor. In this paper, figurative expressions, and especially ironic or sarcastic ones, are considered as a way of indirect denial."
],
"free_form_answer": "",
"highlighted_evidence": [
"We may identify three common FL expression forms namely, irony, sarcasm and metaphor. In this paper, figurative expressions, and especially ironic or sarcastic ones, are considered as a way of indirect denial. From this point of view, the interpretation and ultimately identification of the indirect meaning involved in a passage does not entail the cancellation of the indirectly rejected message and its replacement with the intentionally implied message (as advocated in BIBREF26, BIBREF27). On the contrary ironic/sarcastic expressions presupposes the processing of both the indirectly rejected and the implied message so that the difference between them can be identified. This view differs from the assumption that irony and sarcasm involve only one interpretation BIBREF28, BIBREF29. Holding that irony activates both grammatical / explicit as well as ironic / involved notions provides that irony will be more difficult to grasp than a non-ironic use of the same expression."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"12313f64df6cce25bc917c2309987417fec56201",
"964b387478217eb678ac887fb7143ed38a8dcbc0"
],
"answer": [
{
"evidence": [
"To assess the performance of the proposed method we performed an exhaustive comparison with several advanced state-of-the-art methodologies along with published results. The used methodologies were appropriately implemented using the available codes and guidelines, and include: ELMo BIBREF64, USE BIBREF79, NBSVM BIBREF93, FastText BIBREF94, XLnet base cased model (XLnet) BIBREF88, BERT BIBREF18 in two setups: BERT base cased (BERT-Cased) and BERT base uncased (BERT-Uncased) models, and RoBERTa base model. The published results were acquired from the respective original publication (the reference publication is indicated in the respective tables). For the comparison we utilized benchmark datasets that include ironic, sarcastic and metaphoric expressions. Namely, we used the dataset provided in “Semantic Evaluation Workshop Task 3” (SemEval-2018) that contains ironic tweets BIBREF95; Riloff’s high quality sarcastic unbalanced dataset BIBREF96; a large dataset containing political comments from Reddit BIBREF97; and a SA dataset that contains tweets with various FL forms from “SemEval-2015 Task 11” BIBREF66. All datasets are used in a binary classification manner (i.e., irony/sarcasm vs. literal), except from the “SemEval-2015 Task 11” dataset where the task is to predict a sentiment integer score (from -5 to 5) for each tweet (refer to BIBREF0 for more details). The evaluation was made across standard five metrics namely, Accuracy (Acc), Precision (Pre), Recall (Rec), F1-score (F1), and Area Under the Receiver Operating Characteristics Curve (AUC). For the SA task the cosine similarity metric (Cos) and mean squared error (MSE) metrics are used, as proposed in the original study BIBREF66."
],
"extractive_spans": [
"SemEval-2018",
" Riloff’s high quality sarcastic unbalanced dataset",
" a large dataset containing political comments from Reddit",
"SA dataset that contains tweets with various FL forms from “SemEval-2015 Task 11” "
],
"free_form_answer": "",
"highlighted_evidence": [
" For the comparison we utilized benchmark datasets that include ironic, sarcastic and metaphoric expressions. Namely, we used the dataset provided in “Semantic Evaluation Workshop Task 3” (SemEval-2018) that contains ironic tweets BIBREF95; Riloff’s high quality sarcastic unbalanced dataset BIBREF96; a large dataset containing political comments from Reddit BIBREF97; and a SA dataset that contains tweets with various FL forms from “SemEval-2015 Task 11” BIBREF66. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To assess the performance of the proposed method we performed an exhaustive comparison with several advanced state-of-the-art methodologies along with published results. The used methodologies were appropriately implemented using the available codes and guidelines, and include: ELMo BIBREF64, USE BIBREF79, NBSVM BIBREF93, FastText BIBREF94, XLnet base cased model (XLnet) BIBREF88, BERT BIBREF18 in two setups: BERT base cased (BERT-Cased) and BERT base uncased (BERT-Uncased) models, and RoBERTa base model. The published results were acquired from the respective original publication (the reference publication is indicated in the respective tables). For the comparison we utilized benchmark datasets that include ironic, sarcastic and metaphoric expressions. Namely, we used the dataset provided in “Semantic Evaluation Workshop Task 3” (SemEval-2018) that contains ironic tweets BIBREF95; Riloff’s high quality sarcastic unbalanced dataset BIBREF96; a large dataset containing political comments from Reddit BIBREF97; and a SA dataset that contains tweets with various FL forms from “SemEval-2015 Task 11” BIBREF66. All datasets are used in a binary classification manner (i.e., irony/sarcasm vs. literal), except from the “SemEval-2015 Task 11” dataset where the task is to predict a sentiment integer score (from -5 to 5) for each tweet (refer to BIBREF0 for more details). The evaluation was made across standard five metrics namely, Accuracy (Acc), Precision (Pre), Recall (Rec), F1-score (F1), and Area Under the Receiver Operating Characteristics Curve (AUC). For the SA task the cosine similarity metric (Cos) and mean squared error (MSE) metrics are used, as proposed in the original study BIBREF66."
],
"extractive_spans": [
"dataset provided in “Semantic Evaluation Workshop Task 3”",
" ironic tweets BIBREF95",
"Riloff’s high quality sarcastic unbalanced dataset BIBREF96",
" a large dataset containing political comments from Reddit BIBREF97",
"SA dataset that contains tweets with various FL forms from “SemEval-2015 Task 11” BIBREF66"
],
"free_form_answer": "",
"highlighted_evidence": [
"Namely, we used the dataset provided in “Semantic Evaluation Workshop Task 3” (SemEval-2018) that contains ironic tweets BIBREF95; Riloff’s high quality sarcastic unbalanced dataset BIBREF96; a large dataset containing political comments from Reddit BIBREF97; and a SA dataset that contains tweets with various FL forms from “SemEval-2015 Task 11” BIBREF66. All datasets are used in a binary classification manner (i.e., irony/sarcasm vs. literal), except from the “SemEval-2015 Task 11” dataset where the task is to predict a sentiment integer score (from -5 to 5) for each tweet (refer to BIBREF0 for more details)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"af95b6d3910d75a1a7f0b13276c4a15395dac55b",
"60c2f95bc9b313c348f67a74ff2da13530b8c00a"
],
"answer": [
{
"evidence": [
"Although the NLP community have researched all aspects of FL independently, none of the proposed systems were evaluated on more than one type. Related work on FL detection and classification tasks could be categorized into two main categories, according to the studied task: (a) irony and sarcasm detection, and (b) sentiment analysis of FL excerpts. Even if sarcasm and irony are not identical phenomenons, we will present those types together, as they appear in the literature."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" Even if sarcasm and irony are not identical phenomenons, we will present those types together, as they appear in the literature."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We may identify three common FL expression forms namely, irony, sarcasm and metaphor. In this paper, figurative expressions, and especially ironic or sarcastic ones, are considered as a way of indirect denial. From this point of view, the interpretation and ultimately identification of the indirect meaning involved in a passage does not entail the cancellation of the indirectly rejected message and its replacement with the intentionally implied message (as advocated in BIBREF26, BIBREF27). On the contrary ironic/sarcastic expressions presupposes the processing of both the indirectly rejected and the implied message so that the difference between them can be identified. This view differs from the assumption that irony and sarcasm involve only one interpretation BIBREF28, BIBREF29. Holding that irony activates both grammatical / explicit as well as ironic / involved notions provides that irony will be more difficult to grasp than a non-ironic use of the same expression."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We may identify three common FL expression forms namely, irony, sarcasm and metaphor. In this paper, figurative expressions, and especially ironic or sarcastic ones, are considered as a way of indirect denial. From this point of view, the interpretation and ultimately identification of the indirect meaning involved in a passage does not entail the cancellation of the indirectly rejected message and its replacement with the intentionally implied message (as advocated in BIBREF26, BIBREF27). On the contrary ironic/sarcastic expressions presupposes the processing of both the indirectly rejected and the implied message so that the difference between them can be identified. This view differs from the assumption that irony and sarcasm involve only one interpretation BIBREF28, BIBREF29. Holding that irony activates both grammatical / explicit as well as ironic / involved notions provides that irony will be more difficult to grasp than a non-ironic use of the same expression."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"d77a4c1443fc803a4daf4b69bcbdeb30d6559652",
"f25140374ee5dd0df9a5d8baa590201a065ce382"
],
"answer": [
{
"evidence": [
"We may identify three common FL expression forms namely, irony, sarcasm and metaphor. In this paper, figurative expressions, and especially ironic or sarcastic ones, are considered as a way of indirect denial. From this point of view, the interpretation and ultimately identification of the indirect meaning involved in a passage does not entail the cancellation of the indirectly rejected message and its replacement with the intentionally implied message (as advocated in BIBREF26, BIBREF27). On the contrary ironic/sarcastic expressions presupposes the processing of both the indirectly rejected and the implied message so that the difference between them can be identified. This view differs from the assumption that irony and sarcasm involve only one interpretation BIBREF28, BIBREF29. Holding that irony activates both grammatical / explicit as well as ironic / involved notions provides that irony will be more difficult to grasp than a non-ironic use of the same expression."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We may identify three common FL expression forms namely, irony, sarcasm and metaphor. In this paper, figurative expressions, and especially ironic or sarcastic ones, are considered as a way of indirect denial. From this point of view, the interpretation and ultimately identification of the indirect meaning involved in a passage does not entail the cancellation of the indirectly rejected message and its replacement with the intentionally implied message (as advocated in BIBREF26, BIBREF27). On the contrary ironic/sarcastic expressions presupposes the processing of both the indirectly rejected and the implied message so that the difference between them can be identified. This view differs from the assumption that irony and sarcasm involve only one interpretation BIBREF28, BIBREF29. Holding that irony activates both grammatical / explicit as well as ironic / involved notions provides that irony will be more difficult to grasp than a non-ironic use of the same expression."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What are the baseline models?",
"How are the three different forms defined in this work?",
"What datasets are used for training and testing?",
"Does approach handle overlapping forms (e.g., metaphor and irony)?",
"Does this work differentiate metaphor(technique) from irony and sarcasm (purpose)? "
],
"question_id": [
"8e113fd9661bc8af97e30c75a20712f01fc4520a",
"35e0e6f89b010f34cfb69309b85db524a419c862",
"992e67f706c728bc0e534f974c1656da10e7a724",
"61e96abdc924c34c6b82a587168ea3d14fe792d1",
"ee8a77cddbe492c686f5af3923ad09d401a741b5"
],
"question_writer": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1 Selected hyperparameters used in our proposed method RCNN-RoBERTa. The hyperparameters where selected using grid-search on a 5-fold validation.",
"Table 2 Comparison of RCNN-RoBERTa with state-ofthe-art neural network classifiers and published results on SemEval-2018 dataset; bold figures indicate superior performance.",
"Table 3 Comparison of RCNN-RoBERTa with state-of-theart neural network classifiers and published results on Reddit Politics dataset.",
"Table 4 Comparison of RCNN-RoBERTa with state-of-theart neural network classifiers and published results on on Sarcastic Rillofs dataset.",
"Table 5 Comparison of RCNN-RoBERTa with state-of-theart neural network classifiers and published results on Task11 - SemEval-2015 dataset (sentiment analysis of figurative language expression)."
],
"file": [
"6-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"8-Table5-1.png"
]
} | [
"How are the three different forms defined in this work?"
] | [
[
"1911.10401-Introduction-3"
]
] | [
"Irony, sarcasm and metaphor are figurative language form. Irony and sarcasm are considered as a way of indirect denial."
] | 264 |
1911.03854 | r/Fakeddit: A New Multimodal Benchmark Dataset for Fine-grained Fake News Detection | Fake news has altered society in negative ways as evidenced in politics and culture. It has adversely affected both online social network systems as well as offline communities and conversations. Using automatic fake news detection algorithms is an efficient way to combat the rampant dissemination of fake news. However, using an effective dataset has been a problem for fake news research and detection model development. In this paper, we present Fakeddit, a novel dataset consisting of about 800,000 samples from multiple categories of fake news. Each sample is labeled according to 2-way, 3-way, and 5-way classification categories. Prior fake news datasets do not provide multimodal text and image data, metadata, comment data, and fine-grained fake news categorization at this scale and breadth. We construct hybrid text+image models and perform extensive experiments for multiple variations of classification. | {
"paragraphs": [
[
"Within our progressively digitized society, the spread of fake news and misinformation has enlarged, leading to many problems such as an increasingly politically divisive climate. The dissemination and consequences of fake news are exacerbating partly due to the rise of popular social media applications with inadequate fact-checking or third-party filtering, enabling any individual to broadcast fake news easily and at a large scale BIBREF0. Though steps have been taken to detect and eliminate fake news, it still poses a dire threat to society BIBREF1. As such, research in the area of fake news detection is essential.",
"To build any machine learning model, one must obtain good training data for the specified task. In the realm of fake news detection, there are several existing published datasets. However, they have several limitations: limited size, modality, and/or granularity. Though fake news may immediately be thought of as taking the form of text, it can appear in other mediums such as images. As such, it is important that standard fake news detection systems detect all types of fake news and not just text data. Our dataset will expand fake news research into the multimodal space and allow researchers to develop stronger fake news detection systems.",
"Our contributions to the study of fake news detection are:",
"We create a large-scale multimodal fake news dataset consisting of around 800,000 samples containing text, image, metadata, and comments data from a highly diverse set of resources.",
"Each data sample consists of multiple labels, allowing users to utilize the dataset for 2-way, 3-way, and 5-way classification. This enables both high-level and fine-grained fake news classification.",
"We evaluate our dataset through text, image, and text+image modes with a neural network architecture that integrates both the image and text data. We run experiments for several types of models, providing a comprehensive overview of classification results."
],
[
"A variety of datasets for fake news detection have been published in recent years. These are listed in Table TABREF1, along with their specific characteristics. When comparing these datasets, a few trends can be seen. Most of the datasets are small in size, which can be ineffective for current machine learning models that require large quantities of training data. Only four contain over half a million samples, with CREDBANK and FakeNewsCorpus being the largest with millions of samples BIBREF2. In addition, many of the datasets separate their data into a small number of classes, such as fake vs. true. However, fake news can be categorized into many different types BIBREF3. Datasets such as NELA-GT-2018, LIAR, and FakeNewsCorpus provide more fine-grained labels BIBREF4, BIBREF5. While some datasets include data from a variety of categories BIBREF6, BIBREF7, many contain data from specific areas, such as politics and celebrity gossip BIBREF8, BIBREF9, BIBREF10, BIBREF11. These data samples may contain limited styles of writing due to this categorization. Finally, most of the existing fake news datasets collect only text data, which is not the only mode that fake news can appear in. Datasets such as image-verification-corpus, Image Manipulation, BUZZFEEDNEWS, and BUZZFACE can be utilized for fake image detection, but contain small sample sizesBIBREF12, BIBREF13, BIBREF14. It can be seen from the table that compared to other existing datasets, Fakeddit contains a large quantity of data, while also annotating for three different types of classification labels (2-way, 3-way, and 5-way) and comparing both text and image data."
],
[
"Many fake news datasets are crowdsourced or handpicked from a select few sources that are narrow in size, modality, and/or diversity. In order to expand and evolve fake news research, researchers need to have access to a dataset that exceed these current dataset limitations. Thus, we propose Fakeddit, a novel dataset consisting of a large quantity of text+image samples coming from large diverse sources.",
"We sourced our dataset from Reddit, a social news and discussion website where users can post submissions on various subreddits. Each subreddit has its own theme like `nottheonion', where people post seemingly false stories that are surprisingly true. Active Reddit users are able to upvote, downvote, and comment on the submission.",
"Submissions were collected with the pushshift.io API. Each subreddit has moderators that ensure submissions pertain to the subreddit theme and remove posts that violate any rules, indirectly helping us obtain reliable data. To further ensure that our data is credible, we filtered out any submissions that had a score of less than 1. Fakeddit consists of 825,100 total submissions from 21 different subreddits. We gathered the submission title and image, comments made by users who engaged with the submission, as well as other submission metadata including the score, the username of the author, subreddit source, sourced domain, number of comments, and up-vote to down-vote ratio. 63% of the samples contains both text and images, while the rest contain only text. For our experiments, we utilize these multimodal samples. The samples span over many years and are posted on highly active and popular pages by tens of thousands of diverse individual users from across the world. Because of the variety of the chosen subreddits, our data also varies in its content, ranging from political news stories to simple everyday posts by Reddit users.",
"We provide three labels for each sample, allowing us to train for 2-way, 3-way, and 5-way classification. Having this hierarchy of labels will enable researchers to train for fake news detection at a high level or a more fine-grained one. The 2-way classification determines whether a sample is fake or true. The 3-way classification determines whether a sample is completely true, the sample is fake news with true text (text that is true in the real world), or the sample is fake news with false text. Our final 5-way classification was created to categorize different types of fake news rather than just doing a simple binary or trinary classification. This can help in pinpointing the degree and variation of fake news for applications that require this type of fine-grained detection. The first label is true and the other four are defined within the seven types of fake news BIBREF3. We provide examples from each class for 5-way classification in Figure SECREF3. The 5-way classification labels are explained below:",
"True: True content is accurate in accordance with fact. Eight of the subreddits fall into this category, such as usnews and mildlyinteresting. The former consists of posts from various news sites. The latter encompasses real photos with accurate captions. The other subreddits include photoshopbattles, nottheonion, neutralnews, pic, usanews, and upliftingnews.",
"Satire/Parody: This category consists of content that spins true contemporary content with a satirical tone or information that makes it false. One of the four subreddits that make up this label is theonion, with headlines such as “Man Lowers Carbon Footprint By Bringing Reusable Bags Every Time He Buys Gas\". Other satirical subreddits are fakealbumcovers, satire, and waterfordwhispersnews.",
"Misleading Content: This category consists of information that is intentionally manipulated to fool the audience. Our dataset contains three subreddits in this category: propagandaposters, fakefacts, and savedyouaclick.",
"Imposter Content: This category contains the subredditsimulator subreddit, which contains bot-generated content and is trained on a large number of other subreddits. It also includes subsimulatorgpt2.",
"False Connection: Submission images in this category do not accurately support their text descriptions. We have four subreddits with this label, containing posts of images with captions that do not relate to the true meaning of the image. These include misleadingthumbnails, confusing_perspective, pareidolia, and fakehistoryporn."
],
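To make the labelling scheme above concrete, the subreddit-to-label assignment reduces to a lookup table plus the score filter mentioned earlier. This is a minimal sketch rather than code shipped with the dataset: the label strings and the `label_submission` helper are our own names, and only the subreddits explicitly listed in the text are covered.

```python
# Mapping each source subreddit to its 5-way label, as enumerated above.
SUBREDDIT_TO_5WAY = {
    # True
    "usnews": "true", "mildlyinteresting": "true", "photoshopbattles": "true",
    "nottheonion": "true", "neutralnews": "true", "pic": "true",
    "usanews": "true", "upliftingnews": "true",
    # Satire / Parody
    "theonion": "satire", "fakealbumcovers": "satire",
    "satire": "satire", "waterfordwhispersnews": "satire",
    # Misleading Content
    "propagandaposters": "misleading", "fakefacts": "misleading",
    "savedyouaclick": "misleading",
    # Imposter Content
    "subredditsimulator": "imposter", "subsimulatorgpt2": "imposter",
    # False Connection
    "misleadingthumbnails": "false_connection",
    "confusing_perspective": "false_connection",
    "pareidolia": "false_connection", "fakehistoryporn": "false_connection",
}

def label_submission(subreddit, score):
    """Apply the paper's score filter (drop score < 1) and the 5-way label."""
    if score < 1 or subreddit not in SUBREDDIT_TO_5WAY:
        return None
    return SUBREDDIT_TO_5WAY[subreddit]
```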
[
"Multiple methods were employed for text and image feature extraction. We used InferSent and BERT to generate text embeddings for the title of the Reddit submissions BIBREF15, BIBREF16. VGG16, EfficientNet, and ResNet50 were utilized to extract the features of the Reddit submission thumbnails BIBREF17, BIBREF18, BIBREF19.",
"We used the InferSent model because it performs very well as a universal sentence embeddings generator. For this model, we loaded a vocabulary of 1 million of the most common words in English and used fastText as opposed to ELMO embeddings because fastText can perform relatively well for rare words and words that do not appear in the vocabulary BIBREF20, BIBREF21. We obtained encoded sentence features of length 4096 for each submission title using InferSent.",
"The BERT model achieves state-of-the-art results on many classification tasks, including Q&A and named entity recognition. To obtain fixed-length BERT embedding vectors, we used the bert-as-service tool, which maps variable-length text/sentences into a 768 element array for each Reddit submission title BIBREF22. For our experiments, we utilized the pretrained BERT-Large, Uncased model.",
"We utilized VGG16, ResNet50, and EfficientNet models for encoding images. VGG16 and ResNet50 are widely used by many researchers, while EfficientNet is a relatively newer model. For EfficientNet, we used the smallest variation: B0. For all three image models, we preloaded weights of models trained on ImageNet and included the top layer and used its penultimate layer for feature extraction.",
"For our experiments, we excluded submissions that did not have an image associated with them and solely used submission image and title data. We performed 2-way, 3-way, and 5-way classification for each of the three types of inputs: image only, text only, and multimodal (text and image).",
"Before training, we performed preprocessing on the images and text. We constrained sizes of the images to 224x224. From the text, we removed all punctuation, numbers, and revealing words such as “PsBattle” that automatically reveal the subreddit source. For the savedyouaclick subreddit, we removed text following the “” character and classified it as misleading content.",
"When combining the features in multimodal classification, we first condensed the features into 256-element vectors through a trainable dense layer and then merged them through four different methods: add, concatenate, maximum, average. These features were then passed through a fully connected softmax predictor."
],
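A minimal sketch of the feature-fusion step described above, assuming the text and image features have already been extracted offline (e.g. 768-d BERT vectors and 2048-d ResNet50 penultimate-layer activations). PyTorch is used purely for illustration; `FusionHead`, the default dimensions and the training details are assumptions, not the authors' released code. The four merge strategies from the text are supported, with the best-performing "maximum" as the default.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, text_dim=768, image_dim=2048, n_classes=5, merge="maximum"):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, 256)    # condense text features to 256-d
        self.image_proj = nn.Linear(image_dim, 256)  # condense image features to 256-d
        self.merge = merge
        fused = 512 if merge == "concatenate" else 256
        self.classifier = nn.Linear(fused, n_classes)

    def forward(self, text_feat, image_feat):
        t, v = self.text_proj(text_feat), self.image_proj(image_feat)
        if self.merge == "add":
            z = t + v
        elif self.merge == "average":
            z = (t + v) / 2
        elif self.merge == "maximum":
            z = torch.maximum(t, v)                  # element-wise maximum
        else:                                        # "concatenate"
            z = torch.cat([t, v], dim=-1)
        return torch.log_softmax(self.classifier(z), dim=-1)

# Example (random stand-ins for pre-extracted BERT / ResNet50 features):
# head = FusionHead()
# log_probs = head(torch.randn(4, 768), torch.randn(4, 2048))
```

Training would then minimise cross-entropy (e.g. `nn.NLLLoss` on these log-probabilities) against the 2-way, 3-way or 5-way labels.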
[
"The results are shown in Tables TABREF17 and SECREF3. We found that the multimodal features performed the best, followed by text-only, and image-only in all instances. Thus, having both image and text improves fake news detection. For image and multimodal classification, ResNet50 performed the best followed by VGG16 and EfficientNet. In addition, BERT generally achieved better results than InferSent for multimodal classification. However, for text-only classification InferSent outperformed BERT. The “maximum” method to merge image and text features yielded the highest accuracy, followed by average, concatenate, and add. Overall, the multimodal model that combined BERT text features and ResNet50 image features through the maximum method performed most optimally."
],
[
"In this paper, we presented a novel dataset for fake news research, Fakeddit. Compared to previous datasets, Fakeddit provides a large quantity of text+image samples with multiple labels for various levels of fine-grained classification. We created detection models that incorporate both modalities of data and conducted experiments, showing that there is still room for improvement in fake news detection. Although we do not utilize submission metadata and comments made by users on the submissions, we anticipate that these features will be useful for further research. We hope that our dataset can be used to advance efforts to combat the ever growing rampant spread of misinformation."
],
[
"We would like to acknowledge Facebook for the Online Safety Benchmark Award. The authors are solely responsible for the contents of the paper, and the opinions expressed in this publication do not reflect those of the funding agencies."
]
],
"section_name": [
"Introduction",
"Related Work",
"Fakeddit",
"Experiments ::: Fake News Detection",
"Experiments ::: Results",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"074e161fc2c940b784cc0d49950dde8b3fc6fda4",
"facec1a97e172ede04ad03a06a18019391449b60"
],
"answer": [
{
"evidence": [
"We evaluate our dataset through text, image, and text+image modes with a neural network architecture that integrates both the image and text data. We run experiments for several types of models, providing a comprehensive overview of classification results."
],
"extractive_spans": [],
"free_form_answer": "fake news detection through text, image and text+image modes",
"highlighted_evidence": [
"We evaluate our dataset through text, image, and text+image modes with a neural network architecture that integrates both the image and text data. We run experiments for several types of models, providing a comprehensive overview of classification results."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For our experiments, we excluded submissions that did not have an image associated with them and solely used submission image and title data. We performed 2-way, 3-way, and 5-way classification for each of the three types of inputs: image only, text only, and multimodal (text and image)."
],
"extractive_spans": [],
"free_form_answer": "They experiment on 3 types of classification tasks with different inputs:\n2-way: True/False\n3-way: True/False news with text true in real world/False news with false text\n5-way: True/Parody/Missleading/Imposter/False Connection",
"highlighted_evidence": [
"We performed 2-way, 3-way, and 5-way classification for each of the three types of inputs: image only, text only, and multimodal (text and image)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"8f7c4fce0ce80213a9ecce41e85246ccd3243f77",
"a6532fd8f196de5200ec220ac6bcefe363804d54"
],
"answer": [
{
"evidence": [
"Satire/Parody: This category consists of content that spins true contemporary content with a satirical tone or information that makes it false. One of the four subreddits that make up this label is theonion, with headlines such as “Man Lowers Carbon Footprint By Bringing Reusable Bags Every Time He Buys Gas\". Other satirical subreddits are fakealbumcovers, satire, and waterfordwhispersnews.",
"Misleading Content: This category consists of information that is intentionally manipulated to fool the audience. Our dataset contains three subreddits in this category: propagandaposters, fakefacts, and savedyouaclick.",
"Imposter Content: This category contains the subredditsimulator subreddit, which contains bot-generated content and is trained on a large number of other subreddits. It also includes subsimulatorgpt2.",
"False Connection: Submission images in this category do not accurately support their text descriptions. We have four subreddits with this label, containing posts of images with captions that do not relate to the true meaning of the image. These include misleadingthumbnails, confusing_perspective, pareidolia, and fakehistoryporn."
],
"extractive_spans": [
"Satire/Parody",
"Misleading Content",
"Imposter Content",
"False Connection"
],
"free_form_answer": "",
"highlighted_evidence": [
"Satire/Parody: This category consists of content that spins true contemporary content with a satirical tone or information that makes it false.",
"Misleading Content: This category consists of information that is intentionally manipulated to fool the audience.",
"Imposter Content: This category contains the subredditsimulator subreddit, which contains bot-generated content and is trained on a large number of other subreddits.",
"False Connection: Submission images in this category do not accurately support their text descriptions."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We provide three labels for each sample, allowing us to train for 2-way, 3-way, and 5-way classification. Having this hierarchy of labels will enable researchers to train for fake news detection at a high level or a more fine-grained one. The 2-way classification determines whether a sample is fake or true. The 3-way classification determines whether a sample is completely true, the sample is fake news with true text (text that is true in the real world), or the sample is fake news with false text. Our final 5-way classification was created to categorize different types of fake news rather than just doing a simple binary or trinary classification. This can help in pinpointing the degree and variation of fake news for applications that require this type of fine-grained detection. The first label is true and the other four are defined within the seven types of fake news BIBREF3. We provide examples from each class for 5-way classification in Figure SECREF3. The 5-way classification labels are explained below:",
"True: True content is accurate in accordance with fact. Eight of the subreddits fall into this category, such as usnews and mildlyinteresting. The former consists of posts from various news sites. The latter encompasses real photos with accurate captions. The other subreddits include photoshopbattles, nottheonion, neutralnews, pic, usanews, and upliftingnews.",
"Satire/Parody: This category consists of content that spins true contemporary content with a satirical tone or information that makes it false. One of the four subreddits that make up this label is theonion, with headlines such as “Man Lowers Carbon Footprint By Bringing Reusable Bags Every Time He Buys Gas\". Other satirical subreddits are fakealbumcovers, satire, and waterfordwhispersnews.",
"Misleading Content: This category consists of information that is intentionally manipulated to fool the audience. Our dataset contains three subreddits in this category: propagandaposters, fakefacts, and savedyouaclick.",
"Imposter Content: This category contains the subredditsimulator subreddit, which contains bot-generated content and is trained on a large number of other subreddits. It also includes subsimulatorgpt2.",
"False Connection: Submission images in this category do not accurately support their text descriptions. We have four subreddits with this label, containing posts of images with captions that do not relate to the true meaning of the image. These include misleadingthumbnails, confusing_perspective, pareidolia, and fakehistoryporn."
],
"extractive_spans": [
"Satire/Parody",
"Misleading Content",
"Imposter Content",
"False Connection"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our final 5-way classification was created to categorize different types of fake news rather than just doing a simple binary or trinary classification.",
"The 5-way classification labels are explained below:\n\nTrue: True content is accurate in accordance with fact. Eight of the subreddits fall into this category, such as usnews and mildlyinteresting. The former consists of posts from various news sites. The latter encompasses real photos with accurate captions. The other subreddits include photoshopbattles, nottheonion, neutralnews, pic, usanews, and upliftingnews.\n\nSatire/Parody: This category consists of content that spins true contemporary content with a satirical tone or information that makes it false. One of the four subreddits that make up this label is theonion, with headlines such as “Man Lowers Carbon Footprint By Bringing Reusable Bags Every Time He Buys Gas\". Other satirical subreddits are fakealbumcovers, satire, and waterfordwhispersnews.\n\nMisleading Content: This category consists of information that is intentionally manipulated to fool the audience. Our dataset contains three subreddits in this category: propagandaposters, fakefacts, and savedyouaclick.\n\nImposter Content: This category contains the subredditsimulator subreddit, which contains bot-generated content and is trained on a large number of other subreddits. It also includes subsimulatorgpt2.\n\nFalse Connection: Submission images in this category do not accurately support their text descriptions. We have four subreddits with this label, containing posts of images with captions that do not relate to the true meaning of the image. These include misleadingthumbnails, confusing_perspective, pareidolia, and fakehistoryporn."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two",
"two"
],
"paper_read": [
"no",
"no"
],
"question": [
"What classification tasks do they experiment on?",
"What categories of fake news are in the dataset?"
],
"question_id": [
"552b1c813f25bf39ace6cd5eefa56f4e4dd70c84",
"1100e442e00c9914538a32aca7af994ce42e1b66"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Comparison of various fake news detection datasets. IA: Individual assessments.",
"Figure 1: Dataset examples with 5-way classification labels.",
"Table 2: Results on fake news detection for 2, 3, and 5-way classification with combination method of maximum.",
"Table 3: Results on different multi-modal combinations for BERT + ResNet50"
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"4-Table2-1.png",
"4-Table3-1.png"
]
} | [
"What classification tasks do they experiment on?"
] | [
[
"1911.03854-Experiments ::: Fake News Detection-4",
"1911.03854-Introduction-5"
]
] | [
"They experiment on 3 types of classification tasks with different inputs:\n2-way: True/False\n3-way: True/False news with text true in real world/False news with false text\n5-way: True/Parody/Missleading/Imposter/False Connection"
] | 265 |
1708.03699 | Improved Abusive Comment Moderation with User Embeddings | Experimenting with a dataset of approximately 1.6M user comments from a Greek news sports portal, we explore how a state of the art RNN-based moderation method can be improved by adding user embeddings, user type embeddings, user biases, or user type biases. We observe improvements in all cases, with user embeddings leading to the biggest performance gains. | {
"paragraphs": [
[
"News portals often allow their readers to comment on articles, in order to get feedback, engage their readers, and build customer loyalty. User comments, however, can also be abusive (e.g., bullying, profanity, hate speech), damaging the reputation of news portals, making them liable to fines (e.g., when hosting comments encouraging illegal actions), and putting off readers. Large news portals often employ moderators, who are frequently overwhelmed by the volume and abusiveness of comments. Readers are disappointed when non-abusive comments do not appear quickly online because of moderation delays. Smaller news portals may be unable to employ moderators, and some are forced to shut down their comments.",
"In previous work BIBREF0 , we introduced a new dataset of approx. 1.6M manually moderated user comments from a Greek sports news portal, called Gazzetta, which we made publicly available. Experimenting on that dataset and the datasets of Wulczyn et al. Wulczyn2017, which contain moderated English Wikipedia comments, we showed that a method based on a Recurrent Neural Network (rnn) outperforms detox BIBREF1 , the previous state of the art in automatic user content moderation. Our previous work, however, considered only the texts of the comments, ignoring user-specific information (e.g., number of previously accepted or rejected comments of each user). Here we add user embeddings or user type embeddings to our rnn-based method, i.e., dense vectors that represent individual users or user types, similarly to word embeddings that represent words BIBREF2 , BIBREF3 . Experiments on Gazzetta comments show that both user embeddings and user type embeddings improve the performance of our rnn-based method, with user embeddings helping more. User-specific or user-type-specific scalar biases also help to a lesser extent."
],
[
"We first discuss the dataset we used, to help acquaint the reader with the problem. The dataset contains Greek comments from Gazzetta BIBREF0 . There are approximately 1.45M training comments (covering Jan. 1, 2015 to Oct. 6, 2016); we call them g-train (Table TABREF5 ). An additional set of 60,900 comments (Oct. 7 to Nov. 11, 2016) was split to development set (g-dev, 29,700 comments) and test set (g-test, 29,700). Each comment has a gold label (`accept', `reject'). The user id of the author of each comment is also available, but user id s were not used in our previous work.",
"When experimenting with user type embeddings or biases, we group the users into the following types. INLINEFORM0 is the number of training comments posted by user (id) INLINEFORM1 . INLINEFORM2 is the ratio of training comments posted by INLINEFORM3 that were rejected.",
"Red: Users with INLINEFORM0 and INLINEFORM1 .",
"Yellow: INLINEFORM0 and INLINEFORM1 .",
"Green: INLINEFORM0 and INLINEFORM1 .",
"Unknown: Users with INLINEFORM0 .",
"Table TABREF6 shows the number of users per type."
],
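The user-type grouping above is a simple binning of two per-user statistics. Because the exact cut-off values appear only as INLINEFORM placeholders in the text, the thresholds below are illustrative stand-ins, not the values used in the paper.

```python
def user_type(n_train_comments, rejection_rate,
              min_comments=10, high=0.66, low=0.33):
    """Bucket a user into Red/Yellow/Green/Unknown from their training history.

    `min_comments`, `high` and `low` are placeholder thresholds; the paper's
    exact cut-offs are not recoverable from the text above.
    """
    if n_train_comments < min_comments:
        return "Unknown"
    if rejection_rate >= high:
        return "Red"      # frequently rejected
    if rejection_rate <= low:
        return "Green"    # frequently accepted
    return "Yellow"
```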
[
"rnn: This is the rnn-based method of our previous work BIBREF0 . It is a chain of gru cells BIBREF4 that transforms the tokens INLINEFORM0 of each comment to the hidden states INLINEFORM1 ( INLINEFORM2 ). Once INLINEFORM3 has been computed, a logistic regression (lr) layer estimates the probability that comment INLINEFORM4 should be rejected: DISPLAYFORM0 ",
" INLINEFORM0 is the sigmoid function, INLINEFORM1 , INLINEFORM2 .",
"uernn: This is the rnn-based method with user embeddings added. Each user INLINEFORM0 of the training set with INLINEFORM1 is mapped to a user-specific embedding INLINEFORM2 . Users with INLINEFORM3 are mapped to a single `unknown' user embedding. The lr layer is modified as follows; INLINEFORM4 is the embedding of the author of INLINEFORM5 ; and INLINEFORM6 . DISPLAYFORM0 ",
"ternn: This is the rnn-based method with user type embeddings added. Each user type INLINEFORM0 is mapped to a user type embedding INLINEFORM1 . The lr layer is modified as follows, where INLINEFORM2 is the embedding of the type of the author of INLINEFORM3 . DISPLAYFORM0 ",
"ubrnn: This is the rnn-based method with user biases added. Each user INLINEFORM0 of the training set with INLINEFORM1 is mapped to a user-specific bias INLINEFORM2 . Users with INLINEFORM3 are mapped to a single `unknown' user bias. The lr layer is modified as follows, where INLINEFORM4 is the bias of the author of INLINEFORM5 . DISPLAYFORM0 ",
"We expected ubrnn to learn higher (or lower) INLINEFORM0 biases for users whose posts were frequently rejected (accepted) in the training data, biasing the system towards rejecting (accepting) their posts.",
"tbrnn: This is the rnn-based method with user type biases. Each user type INLINEFORM0 is mapped to a user type bias INLINEFORM1 . The lr layer is modified as follows; INLINEFORM2 is the bias of the type of the author. DISPLAYFORM0 ",
"We expected tbrnn to learn a higher INLINEFORM0 for the red user type (frequently rejected), and a lower INLINEFORM1 for the green user type (frequently accepted), with the biases of the other two types in between.",
"In all methods above, we use 300-dimensional word embeddings, user and user type embeddings with INLINEFORM0 dimensions, and INLINEFORM1 hidden units in the gru cells, as in our previous experiments BIBREF0 , where we tuned all hyper-parameters on 2% held-out training comments. Early stopping evaluates on the same held-out subset. User and user type embeddings are randomly initialized and updated by backpropagation. Word embeddings are initialized to the word2vec embeddings of our previous work BIBREF0 , which were pretrained on 5.2M Gazzetta comments. Out of vocabulary words, meaning words not encountered or encountered only once in the training set and/or words with no initial embeddings, are mapped (during both training and testing) to a single randomly initialized word embedding, updated by backpropagation. We use Glorot initialization BIBREF5 for other parameters, cross-entropy loss, and Adam BIBREF6 .",
"ubase: For a comment INLINEFORM0 authored by user INLINEFORM1 , this baseline returns the rejection rate INLINEFORM2 of the author's training comments, if there are INLINEFORM3 training comments of INLINEFORM4 , and 0.5 otherwise. INLINEFORM5 ",
"tbase: This baseline returns the following probabilities, considering the user type INLINEFORM0 of the author. INLINEFORM1 "
],
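A rough PyTorch sketch of ueRNN as described above: a GRU reads the comment and its final hidden state is combined with a user-specific embedding before the logistic output layer. The class name, the user-embedding size and the number of hidden units are stand-ins for hyperparameters the paper tunes on held-out data (only the 300-d word embeddings are stated explicitly); the extra embedding row plays the role of the single "unknown" user.

```python
import torch
import torch.nn as nn

class UeRNN(nn.Module):
    def __init__(self, vocab_size, n_users, word_dim=300, user_dim=100, hidden=128):
        super().__init__()
        self.words = nn.Embedding(vocab_size, word_dim)      # init from word2vec in the paper
        self.users = nn.Embedding(n_users + 1, user_dim)     # last row: 'unknown' user
        self.gru = nn.GRU(word_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden + user_dim, 1)

    def forward(self, token_ids, user_ids):
        _, h = self.gru(self.words(token_ids))               # h: (1, batch, hidden)
        z = torch.cat([h.squeeze(0), self.users(user_ids)], dim=-1)
        return torch.sigmoid(self.out(z)).squeeze(-1)        # P(reject | comment, user)

# e.g. model = UeRNN(vocab_size=50000, n_users=10000)
# p = model(torch.randint(0, 50000, (8, 40)), torch.randint(0, 10001, (8,)))
```

The user-type and bias variants (teRNN, ubRNN, tbRNN) differ only in replacing the user embedding with a type embedding or a learned scalar added inside the logistic layer.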
[
"Table TABREF15 shows the auc scores (area under roc curve) of the methods considered. Using auc allows us to compare directly to the results of our previous work BIBREF0 and the work of Wulczyn et al. Wulczyn2017. Also, auc considers performance at multiple classification thresholds INLINEFORM0 (rejecting comment INLINEFORM1 when INLINEFORM2 , for different INLINEFORM3 values), which gives a more complete picture compared to reporting precision, recall, or F-scores for a particular INLINEFORM4 only. Accuracy is not an appropriate measure here, because of class imbalance (Table TABREF5 ). For methods that involve random initializations (all but the baselines), the results are averaged over three repetitions; we also report the standard error across the repetitions.",
"User-specific information always improves our original rnn-based method (Table TABREF15 ), but the best results are obtained by adding user embeddings (uernn). Figure FIGREF16 visualizes the user embeddings learned by uernn. The two dimensions of Fig. FIGREF16 correspond to the two principal components of the user embeddings, obtained via pca.The colors and numeric labels reflect the rejection rates INLINEFORM0 of the corresponding users. Moving from left to right in Fig. FIGREF16 , the rejection rate increases, indicating that the user embeddings of uernn capture mostly the rejection rate INLINEFORM1 . This rate (a single scalar value per user) can also be captured by the simpler user-specific biases of ubrnn, which explains why ubrnn also performs well (second best results in Table TABREF15 ). Nevertheless, uernn performs better than ubrnn, suggesting that user embeddings capture more information than just a user-specific rejection rate bias.",
"Three of the user types (Red, Yellow, Green) in effect also measure INLINEFORM0 , but in discretized form (three bins), which also explains why user type embeddings (ternn) also perform well (third best method). The performance of tbrnn is close to that of ternn, suggesting again that most of the information captured by user type embeddings can also be captured by simpler scalar user-type-specific biases. The user type biases INLINEFORM1 learned by tbrnn are shown in Table TABREF18 . The bias of the Red type is the largest, the bias of the Green type is the smallest, and the biases of the Unknown and Yellow types are in between, as expected (Section SECREF3 ). The same observations hold for the average user-specific biases INLINEFORM2 learned by ubrnn (Table TABREF18 ).",
"Overall, Table TABREF15 indicates that user-specific information (uernn, ubrnn) is better than user-type information (ternn, tbrnn), and that embeddings (uernn, ternn) are better than the scalar biases (ubrnn, tbrnn), though the differences are small. All the rnn-based methods outperform the two baselines (ubase, tbase), which do not consider the texts of the comments.",
"Let us provide a couple of examples, to illustrate the role of user-specific information. We encountered a comment saying just “Ooooh, down to Pireaus...” (translated from Greek), which the moderator had rejected, because it is the beginning of an abusive slogan. The rejection probability of rnn was only 0.34, presumably because there are no clearly abusive expressions in the comment, but the rejection probability of uernn was 0.72, because the author had a very high rejection rate. On the other hand, another comment said “Indeed, I know nothing about the filth of Greek soccer.” (translated, apparently not a sarcastic comment). The original rnn method marginally rejected the comment (rejection probability 0.57), presumably because of the `filth' (comments talking about the filth of some sport or championship are often rejected), but uernn gave it a very low rejection probability (0.15), because the author of the comment had a very low rejection rate."
],
[
"In previous work BIBREF0 , we showed that our rnn-based method outperforms detox BIBREF1 , the previous state of the art in user content moderation. detox uses character or word INLINEFORM0 -gram features, no user-specific information, and an lr or mlp classifier. Other related work on abusive content moderation was reviewed extensively in our previous work BIBREF0 . Here we focus on previous work that considered user-specific features and user embeddings.",
"Dadvar et al. Dadvar2013 detect cyberbullying in YouTube comments, using an svm and features examining the content of each comment (e.g., second person pronouns followed by profane words, common bullying words), but also the profile and history of the author of the comment (e.g., age, frequency of profane words in past posts). Waseem et al. Waseem2016 detect hate speech tweets. Their best method is an lr classifier, with character INLINEFORM0 -grams and a feature indicating the gender of the author; adding the location of the author did not help.",
"Cheng et al. Cheng2015 predict which users will be banned from on-line communities. Their best system uses a Random Forest or lr classifier, with features examining the average readability and sentiment of each user's past posts, the past activity of each user (e.g., number of posts daily, proportion of posts that are replies), and the reactions of the community to the past actions of each user (e.g., up-votes, number of posts rejected). Lee et al. Lee2014 and Napoles et al. Napoles2017b include similar user-specific features in classifiers intended to detect high quality on-line discussions.",
"Amir et al. Amir2016 detect sarcasm in tweets. Their best system uses a word-based Convolutional Neural Network (cnn). The feature vector produced by the cnn (representing the content of the tweet) is concatenated with the user embedding of the author, and passed on to an mlp that classifies the tweet as sarcastic or not. This method outperforms a previous state of the art sarcasm detection method BIBREF8 that relies on an lr classifier with hand-crafted content and user-specific features. We use an rnn instead of a cnn, and we feed the comment and user embeddings to a simpler lr layer (Eq. EQREF10 ), instead of an mlp. Amir et al. discard unknown users, unlike our experiments, and consider only sarcasm, whereas moderation also involves profanity, hate speech, bullying, threats etc.",
"User embeddings have also been used in: conversational agents BIBREF9 ; sentiment analysis BIBREF10 ; retweet prediction BIBREF11 ; predicting which topics a user is likely to tweet about, the accounts a user may want to follow, and the age, gender, political affiliation of Twitter users BIBREF12 .",
"Our previous work BIBREF0 also discussed how machine learning can be used in semi-automatic moderation, by letting moderators focus on `difficult' comments and automatically handling comments that are easier to accept or reject. In more recent work BIBREF13 we also explored how an attention mechanism can be used to highlight possibly abusive words or phrases when showing `difficult' comments to moderators."
],
[
"Experimenting with a dataset of approx. 1.6M user comments from a Greek sports news portal, we explored how a state of the art rnn-based moderation method can be improved by adding user embeddings, user type embeddings, user biases, or user type biases. We observed improvements in all cases, but user embeddings were the best.",
"We plan to compare uernn to cnn-based methods that employ user embeddings BIBREF14 , after replacing the lr layer of uernn by an mlp to allow non-linear combinations of comment and user embeddings."
],
[
"This work was funded by Google's Digital News Initiative (project ml2p, contract 362826). We are grateful to Gazzetta for the data they provided. We also thank Gazzetta's moderators for their feedback, insights, and advice."
]
],
"section_name": [
"Introduction",
"Dataset",
"Methods",
"Results and Discussion",
"Related work",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"0753d6ef3406703ffa9474e12d4e481996580f1a",
"75395312b1b17430d3813a726c4a04b1bff51233"
],
"answer": [
{
"evidence": [
"User-specific information always improves our original rnn-based method (Table TABREF15 ), but the best results are obtained by adding user embeddings (uernn). Figure FIGREF16 visualizes the user embeddings learned by uernn. The two dimensions of Fig. FIGREF16 correspond to the two principal components of the user embeddings, obtained via pca.The colors and numeric labels reflect the rejection rates INLINEFORM0 of the corresponding users. Moving from left to right in Fig. FIGREF16 , the rejection rate increases, indicating that the user embeddings of uernn capture mostly the rejection rate INLINEFORM1 . This rate (a single scalar value per user) can also be captured by the simpler user-specific biases of ubrnn, which explains why ubrnn also performs well (second best results in Table TABREF15 ). Nevertheless, uernn performs better than ubrnn, suggesting that user embeddings capture more information than just a user-specific rejection rate bias.",
"FLOAT SELECTED: Table 3: AUC scores. Standard error in brackets."
],
"extractive_spans": [],
"free_form_answer": "On test set RNN that uses user embedding has AUC of 80.53 compared to base RNN 79.24.",
"highlighted_evidence": [
"User-specific information always improves our original rnn-based method (Table TABREF15 ), but the best results are obtained by adding user embeddings (uernn).",
"FLOAT SELECTED: Table 3: AUC scores. Standard error in brackets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: AUC scores. Standard error in brackets."
],
"extractive_spans": [],
"free_form_answer": "16.89 points on G-test from the baseline tBase",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: AUC scores. Standard error in brackets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
""
],
"paper_read": [
""
],
"question": [
"How much gain in performance was obtained with user embeddings?"
],
"question_id": [
"82b93ecd2397e417e1e80f93b7cf49c7bd9aeec3"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
""
]
} | {
"caption": [
"Table 1: Comment statistics of the dataset used.",
"Table 2: User statistics of the dataset used.",
"Table 3: AUC scores. Standard error in brackets.",
"Figure 1: User embeddings learned by ueRNN (2 principal components). Color represents the rejection rate R(u) of the user’s training comments.",
"Table 4: Biases learned and standard error."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"3-Table3-1.png",
"3-Figure1-1.png",
"4-Table4-1.png"
]
} | [
"How much gain in performance was obtained with user embeddings?"
] | [
[
"1708.03699-3-Table3-1.png",
"1708.03699-Results and Discussion-1"
]
] | [
"16.89 points on G-test from the baseline tBase"
] | 266 |
1608.01972 | Bridging the Gap: Incorporating a Semantic Similarity Measure for Effectively Mapping PubMed Queries to Documents | The main approach of traditional information retrieval (IR) is to examine how many words from a query appear in a document. A drawback of this approach, however, is that it may fail to detect relevant documents where no or only few words from a query are found. The semantic analysis methods such as LSA (latent semantic analysis) and LDA (latent Dirichlet allocation) have been proposed to address the issue, but their performance is not superior compared to common IR approaches. Here we present a query-document similarity measure motivated by the Word Mover's Distance. Unlike other similarity measures, the proposed method relies on neural word embeddings to calculate the distance between words. Our method is efficient and straightforward to implement. The experimental results on TREC and PubMed show that our approach provides significantly better performance than BM25. We also discuss the pros and cons of our approach and show that there is a synergy effect when the word embedding measure is combined with the BM25 function. | {
"paragraphs": [
[
"In information retrieval (IR), queries and documents are typically represented by term vectors where each term is a content word and weighted by tf-idf, i.e. the product of the term frequency and the inverse document frequency, or other weighting schemes BIBREF0 . The similarity of a query and a document is then determined as a dot product or cosine similarity. Although this works reasonably, the traditional IR scheme often fails to find relevant documents when synonymous or polysemous words are used in a dataset, e.g. a document including only “neoplasm\" cannot be found when the word “cancer\" is used in a query. One solution of this problem is to use query expansion BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 or dictionaries, but these alternatives still depend on the same philosophy, i.e. queries and documents should share exactly the same words.",
"While the term vector model computes similarities in a sparse and high-dimensional space, the semantic analysis methods such as latent semantic analysis (LSA) BIBREF5 , BIBREF6 and latent Dirichlet allocation (LDA) BIBREF7 learn dense vector representations in a low-dimensional space. These methods choose a vector embedding for each term and estimate a similarity between terms by taking an inner product of their corresponding embeddings BIBREF8 . Since the similarity is calculated in a latent (semantic) space based on context, the semantic analysis approaches do not require having common words between a query and documents. However, it has been shown that LSA and LDA methods do not produce superior results in various IR tasks BIBREF9 , BIBREF10 , BIBREF11 and the classic ranking method, BM25 BIBREF12 , usually outperforms those methods in document ranking BIBREF13 , BIBREF14 .",
"Neural word embedding BIBREF15 , BIBREF16 is similar to the semantic analysis methods described above. It learns low-dimensional word vectors from text, but while LSA and LDA utilize co-occurrences of words, neural word embedding learns word vectors to predict context words BIBREF10 . Moreover, training of semantic vectors is derived from neural networks. Both co-occurrence and neural word embedding approaches have been used for lexical semantic tasks such as semantic relatedness (e.g. king and queen), synonym detection (e.g. cancer and carcinoma) and concept categorization (e.g. banana and pineapple belong to fruits) BIBREF10 , BIBREF17 . But, Baroni et al. Baroni2014 showed that neural word embedding approaches generally performed better on such tasks with less effort required for parameter optimization. The neural word embedding models have also gained popularity in recent years due to their high performance in NLP tasks BIBREF18 .",
"Here we present a query-document similarity measure using a neural word embedding approach. This work is particularly motivated by the Word Mover's Distance BIBREF19 . Unlike the common similarity measure taking query/document centroids of word embeddings, the proposed method evaluates a distance between individual words from a query and a document. Our first experiment was performed on the TREC 2006 and 2007 Genomics benchmark sets BIBREF20 , BIBREF21 , and the experimental results showed that our approach was better than BM25 ranking. This was solely based on matching queries and documents by the semantic measure and no other feature was used for ranking documents.",
"In general, conventional ranking models (e.g. BM25) rely on a manually designed ranking function and require heuristic optimization for parameters BIBREF22 , BIBREF23 . In the age of information explosion, this one-size-fits-all solution is no longer adequate. For instance, it is well known that links to a web page are an important source of information in web document search BIBREF24 , hence using the link information as well as the relevance between a query and a document is crucial for better ranking. In this regard, learning to rank BIBREF22 has drawn much attention as a scheme to learn how to combine diverse features. Given feature vectors of documents and their relevance levels, a learning to rank approach learns an optimal way of weighting and combining multiple features.",
"We argue that the single scores (or features) produced by BM25 and our proposed semantic measure complement each other, thus merging these two has a synergistic effect. To confirm this, we measured the impact on document ranking by combining BM25 and semantic scores using the learning to rank approach, LamdaMART BIBREF25 , BIBREF26 . Trained on PubMed user queries and their click-through data, we evaluated the search performance based on the most highly ranked 20 documents. As a result, we found that using our semantic measure further improved the performance of BM25.",
"Taken together, we make the following important contributions in this work. First, to the best of our knowledge, this work represents the first investigation of query-document similarity for information retrieval using the recently proposed Word Mover's Distance. Second, we modify the original Word Mover's Distance algorithm so that it is computationally less expensive and thus more practical and scalable for real-world search scenarios (e.g. biomedical literature search). Third, we measure the actual impact of neural word embeddings in PubMed by utilizing user queries and relevance information derived from click-through data. Finally, on TREC and PubMed datasets, our proposed method achieves stronger performance than BM25."
],
[
"A common approach to computing similarity between texts (e.g. phrases, sentences or documents) is to take a centroid of word embeddings, and evaluate an inner product or cosine similarity between centroids BIBREF14 , BIBREF27 . This has found use in classification and clustering because they seek an overall topic of each document. However, taking a simple centroid is not a good approximator for calculating a distance between a query and a document BIBREF19 . This is mostly because queries tend to be short and finding the actual query words in documents is feasible and more accurate than comparing lossy centroids. Consistent with this, our approach here is to measure the distance between individual words, not the average distance between a query and a document."
],
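For contrast, the centroid approach discussed above amounts to only a few lines; `emb` is assumed to be a token-to-vector dictionary. This is the baseline the authors argue is a poor approximation for query-document matching, not their proposed measure.

```python
import numpy as np

def centroid_similarity(query_tokens, doc_tokens, emb):
    """Cosine similarity between the centroids of query and document embeddings."""
    def centroid(tokens):
        vecs = [emb[w] for w in tokens if w in emb]
        return np.mean(vecs, axis=0) if vecs else None
    q, d = centroid(query_tokens), centroid(doc_tokens)
    if q is None or d is None:
        return 0.0
    return float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))
```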
[
"Our work is based on the Word Mover's Distance between text documents BIBREF19 , which calculates the minimum cumulative distance that words from a document need to travel to match words from a second document. In this subsection, we outline the original Word Mover's Distance algorithm, and our adapted model is described in Section 2.2.",
"First, following Kusner et al. Kusner2015, documents are represented by normalized bag-of-words (BOW) vectors, i.e. if a word INLINEFORM0 appears INLINEFORM1 times in a document, the weight is DISPLAYFORM0 ",
"where INLINEFORM0 is number of words in the document. The higher the weight, the more important the word. They assume a word embedding so that each word INLINEFORM1 has an associated vector INLINEFORM2 . The dissimilarity INLINEFORM3 between INLINEFORM4 and INLINEFORM5 is then calculated by DISPLAYFORM0 ",
"The Word Mover's Distance makes use of word importance and the relatedness of words as we now describe.",
"Let INLINEFORM0 and INLINEFORM1 be BOW representations of two documents INLINEFORM2 and INLINEFORM3 . Let INLINEFORM4 be a flow matrix, where INLINEFORM5 denotes how much it costs to travel from INLINEFORM6 in INLINEFORM7 to INLINEFORM8 in INLINEFORM9 , and INLINEFORM10 is the number of unique words appearing in INLINEFORM11 and/or INLINEFORM12 . To entirely transform INLINEFORM13 to INLINEFORM14 , we ensure that the entire outgoing flow from INLINEFORM15 equals INLINEFORM16 and the incoming flow to INLINEFORM17 equals INLINEFORM18 . The Word Mover's Distance between INLINEFORM19 and INLINEFORM20 is then defined as the minimum cumulative cost required to move all words from INLINEFORM21 to INLINEFORM22 or vice versa, i.e. DISPLAYFORM0 ",
"The solution is attained by finding INLINEFORM0 that minimizes the expression in Eq. ( EQREF5 ). Kusner et al. Kusner2015 applied this to obtain nearest neighbors for document classification, i.e. k-NN classification and it produced outstanding performance among other state-of-the-art approaches. What we have just described is the approach given in Kusner et al. We will modify the word weights and the measure of the relatedness of words to better suit our application."
],
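The minimisation in Eq. (3) is a small transportation problem, so it can be solved exactly with an off-the-shelf linear-programming routine. The sketch below (plain NumPy/SciPy, with our own function name) is only meant to make the formulation concrete; practical systems use dedicated optimal-transport solvers, since this LP scales poorly with vocabulary size.

```python
import numpy as np
from scipy.optimize import linprog

def word_movers_distance(d1, d2, cost):
    # d1: (n,) nBOW weights of document 1; d2: (m,) nBOW weights of document 2;
    # cost: (n, m) pairwise word distances c(i, j) as in Eq. (2).
    n, m = cost.shape
    c = cost.ravel()                              # objective: sum_ij T_ij * c(i, j)
    A_rows = np.zeros((n, n * m))                 # outgoing flow of word i equals d1[i]
    for i in range(n):
        A_rows[i, i * m:(i + 1) * m] = 1.0
    A_cols = np.zeros((m, n * m))                 # incoming flow into word j equals d2[j]
    for j in range(m):
        A_cols[j, j::m] = 1.0
    res = linprog(c, A_eq=np.vstack([A_rows, A_cols]),
                  b_eq=np.concatenate([d1, d2]),
                  bounds=(0, None), method="highs")
    return res.fun                                # minimum cumulative travel cost
```

Here `cost[i, j]` would hold the Euclidean distance of Eq. (2) between the embeddings of the i-th and j-th words, and `d1`, `d2` the nBOW weights of Eq. (1).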
[
"While the prior work gives a hint that the Word Mover's Distance is a reasonable choice for evaluating a similarity between documents, it is uncertain how the same measure could be used for searching documents to satisfy a query. First, it is expensive to compute the Word Mover's Distance. The time complexity of solving the distance problem is INLINEFORM0 BIBREF28 . Second, the semantic space of queries is not the same as those of documents. A query consists of a small number of words in general, hence words in a query tend to be more ambiguous because of the restricted context. On the contrary, a text document is longer and more informational. Having this in mind, we realize that ideally two distinctive components could be employed for query-document search: 1) mapping queries to documents using a word embedding model trained on a document set and 2) mapping documents to queries using a word embedding model obtained from a query set. In this work, however, we aim to address the former, and the mapping of documents to queries remains as future work.",
"For our purpose, we will change the word weight INLINEFORM0 to incorporate inverse document frequency ( INLINEFORM1 ), i.e. DISPLAYFORM0 ",
"where INLINEFORM0 . INLINEFORM1 is the size of a document set and INLINEFORM2 is the number of documents that include the INLINEFORM3 th term. The rationale behind this is to weight words in such a way that common terms are given less importance. It is the idf factor normally used in tf-idf and BM25 BIBREF29 , BIBREF30 . In addition, our word embedding is a neural word embedding trained on the 25 million PubMed titles and abstracts.",
"Let INLINEFORM0 and INLINEFORM1 be BOW representations of a query INLINEFORM2 and a document INLINEFORM3 . INLINEFORM4 and INLINEFORM5 in Section 2.1 are now replaced by INLINEFORM6 and INLINEFORM7 , respectively. Since we want to have a higher score for documents relevant to INLINEFORM8 , INLINEFORM9 is redefined as a cosine similarity, i.e. DISPLAYFORM0 ",
"In addition, the problem we try to solve is the flow INLINEFORM0 . Hence, Eq. ( EQREF5 ) is rewritten as follows. DISPLAYFORM0 ",
"where INLINEFORM0 represents the word INLINEFORM1 in INLINEFORM2 . INLINEFORM3 in Eq. ( EQREF7 ) is unknown for queries, therefore we compute INLINEFORM4 based on the document collection. The optimal solution of the expression in Eq. ( EQREF9 ) is to map each word in INLINEFORM5 to the most similar word in INLINEFORM6 based on word embeddings. The time complexity for getting the optimal solution is INLINEFORM7 , where INLINEFORM8 is the number of unique query words and INLINEFORM9 is the number of unique document words. In general, INLINEFORM10 and evaluating the similarity between a query and a document can be implemented in parallel computation. Thus, the document ranking process can be quite efficient."
],
[
"In our study, we use learning to rank to merge two distinctive features, BM25 scores and our semantic measures. This approach is trained and evaluated on real-world PubMed user queries and their responses based on click-through data BIBREF31 . While it is not common to use only two features for learning to rank, this approach is scalable and versatile. Adding more features subsequently should be straightforward and easy to implement. The performance result we obtain demonstrates the semantic measure is useful to rank documents according to users' interests.",
"We briefly outline learning to rank approaches BIBREF32 , BIBREF33 in this subsection. For a list of retrieved documents, i.e. for a query INLINEFORM0 and a set of candidate documents, INLINEFORM1 , we are given their relevancy judgements INLINEFORM2 , where INLINEFORM3 is a positive integer when the document INLINEFORM4 is relevant and 0 otherwise. The goal of learning to rank is to build a model INLINEFORM5 that can rank relevant documents near or at the top of the ranked retrieval list. To accomplish this, it is common to learn a function INLINEFORM6 , where INLINEFORM7 is a weight vector applied to the feature vector INLINEFORM8 . A part of learning involves learning the weight vector but the form of INLINEFORM9 may also require learning. For example, INLINEFORM10 may involve learned decision trees as in our application.",
"In particular, we use LambdaMART BIBREF25 , BIBREF26 for our experiments. LambdaMART is a pairwise learning to rank approach and is being used for PubMed relevance search. While the simplest approach (pointwise learning) is to train the function INLINEFORM0 directly, pairwise approaches seek to train the model to place correct pairs higher than incorrect pairs, i.e. INLINEFORM1 , where the document INLINEFORM2 is relevant and INLINEFORM3 is irrelevant. INLINEFORM4 indicates a margin. LambdaMART is a boosted tree version of LambdaRank BIBREF26 . An ensemble of LambdaMART, LambdaRank and logistic regression models won the Yahoo! learning to rank challenge BIBREF23 ."
],
[
"Our resulting formula from the Word Mover's Distance seeks to find the closest terms for each query word. Figure FIGREF11 depicts an example with and without using our semantic matching. For the query, “negative pressure wound therapy\", a traditional way of searching documents is to find those documents which include the words “negative\", “pressure\", “wound\" and “therapy\". As shown in the figure, the words, “pressure\" and “therapy\", cannot be found by perfect string match. On the other hand, within the same context, the semantic measure finds the closest words “NPWT\" and “therapies\" for “pressure\" and “therapy\", respectively. Identifying abbreviations and singular/plural would help match the same words, but this example is to give a general idea about the semantic matching process. Also note that using dictionaries such as synonyms and abbreviations requires an additional effort for manual annotation.",
"In the following subsections, we describe the datasets and experiments, and discuss our results."
],
[
"To evaluate our word embedding approach, we used two scientific literature datasets: TREC Genomics data and PubMed. Table TABREF13 shows the number of queries and documents in each dataset. TREC represents the benchmark sets created for the TREC 2006 and 2007 Genomics Tracks BIBREF20 , BIBREF21 . The original task is to retrieve passages relevant to topics (i.e. queries) from full-text articles, but the same set can be utilized for searching relevant PubMed documents. We consider a PubMed document relevant to a TREC query if and only if the full-text of the document contains a passage judged relevant to that query by the TREC judges. Our setup is more challenging because we only use PubMed abstracts, not full-text articles, to find evidence.",
"This is the number of PubMed documents as of Apr. 6, 2017. This number and the actual number of documents used for our experiments may differ slightly.",
"Machine learning approaches, especially supervised ones such as learning to rank, are promising and popular nowadays. Nonetheless, they usually require a large set of training examples, and such datasets are particularly difficult to find in the biomedical domain. For this reason, we created a gold standard set based on real (anonymized) user queries and the actions users subsequently took, and named this the PubMed set.",
"To build the PubMed set, we collected one year's worth of search logs and restricted the set of queries to those where users requested the relevance order and which yielded at least 20 retrieved documents. This set contained many popular but duplicate queries. Therefore, we merged queries and summed up user actions for each of them. That is, for each document stored for each query, we counted the number of times it was clicked in the retrieved set (i.e. abstract click) and the number of times users requested full-text articles (i.e. full-text click). We considered the queries that appeared less than 10 times to be less informative because they were usually very specific, and we could not collect enough user actions for training. After this step, we further filtered out non-informational queries (e.g. author and journal names). As the result, 27,870 queries remained for the final set.",
"The last step for producing the PubMed set was to assign relevance scores to documents for each query. We will do this based on user clicks. It is known that click-through data is a useful proxy for relevance judgments BIBREF34 , BIBREF35 , BIBREF36 . Let INLINEFORM0 be the number of clicks to the abstract of a document INLINEFORM1 from the results page for the query INLINEFORM2 . Let INLINEFORM3 be the number of clicks from INLINEFORM4 's abstract page to its full-text, which result from the query INLINEFORM5 . Let INLINEFORM6 be the boost factor for documents without links to full-text articles. INLINEFORM7 is the indicator function such that INLINEFORM8 if the document INLINEFORM9 includes a link to full-text articles and INLINEFORM10 otherwise. We can then calculate the relevance, INLINEFORM11 , of a document for a given query: DISPLAYFORM0 ",
" INLINEFORM0 is the trade-off between the importance of abstract clicks and full-text clicks. The last term of the relevance function gives a slight boost to documents without full-text links, so that they get a better relevance (thus rank) than those for which full-text is available but never clicked, assuming they all have the same amount of abstract clicks. We manually tuned the parameters based on user behavior and log analyses, and used the settings, INLINEFORM1 and INLINEFORM2 .",
"Compared to the TREC Genomics set, the full PubMed set is much larger, including all 27 million documents in PubMed. While the TREC and PubMed sets share essentially the same type of documents, the tested queries are quite different. The queries in TREC are a question type, e.g. “what is the role of MMS2 in cancer?\" However, the PubMed set uses actual queries from PubMed users.",
"In our experiments, the TREC set was used for evaluating BM25 and the semantic measure separately and the PubMed set was used for evaluating the learning to rank approach. We did not use the TREC set for learning to rank due to the small number of queries. Only 62 queries and 162,259 documents are available in TREC, whereas the PubMed set consists of many more queries and documents."
],
[
"We used the skip-gram model of word2vec BIBREF16 to obtain word embeddings. The alternative models such as GloVe BIBREF11 and FastText BIBREF37 are available, but their performance varies depending on tasks and is comparable to word2vec overall BIBREF38 , BIBREF39 . word2vec was trained on titles and abstracts from over 25 million PubMed documents. Word vector size and window size were set to 100 and 10, respectively. These parameters were optimized to produce high recall for synonyms BIBREF40 . Note that an independent set (i.e. synonyms) was used for tuning word2vec parameters, and the trained model is available online (https://www.ncbi.nlm.nih.gov/IRET/DATASET).",
"For experiments, we removed stopwords from queries and documents. BM25 was chosen for performance comparison and the parameters were set to INLINEFORM0 and INLINEFORM1 BIBREF41 . Among document ranking functions, BM25 shows a competitive performance BIBREF42 . It also outperforms co-occurrence based word embedding models BIBREF13 , BIBREF14 . For learning to rank approaches, 70% of the PubMed set was used for training and the rest for testing. The RankLib library (https://sourceforge.net/p/lemur/wiki/RankLib) was used for implementing LambdaMART and the PubMed experiments."
],
[
"Table TABREF17 presents the average precision of tf-idf (TFIDF), BM25, word vector centroid (CENTROID) and our embedding approach on the TREC dataset. Average precision BIBREF43 is the average of the precisions at the ranks where relevant documents appear. Relevance judgements in TREC are based on the pooling method BIBREF44 , i.e. relevance is manually assessed for top ranking documents returned by participating systems. Therefore, we only used the documents that annotators reviewed for our evaluation BIBREF1 .",
"As shown in Table TABREF17 , BM25 performs better than TFIDF and CENTROID. CENTROID maps each query and document to a vector by taking a centroid of word embedding vectors, and the cosine similarity between two vectors is used for scoring and ranking documents. As mentioned earlier, this approach is not effective when multiple topics exist in a document. From the table, the embedding approach boosts the average precision of BM25 by 19% and 6% on TREC 2006 and 2007, respectively. However, CENTROID provides scores lower than BM25 and SEM approaches.",
"Although our approach outperforms BM25 on TREC, we do not claim that BM25 and other traditional approaches can be completely replaced with the semantic method. We see the semantic approach as a means to narrow the gap between words in documents and those in queries (or users' intentions). This leads to the next experiment using our semantic measure as a feature for ranking in learning to rank."
],
[
"For the PubMed dataset, we used learning to rank to combine BM25 and our semantic measure. An advantage of using learning to rank is its flexibility to add more features and optimize performance by learning their importance. PubMed documents are semi-structured, consisting of title, abstract and many more fields. Since our interest lies in text, we only used titles and abstracts, and applied learning to rank in two different ways: 1) to find semantically closest words in titles (BM25 + SEMTitle) and 2) to find semantically closest words in abstracts (BM25 + SEMAbstract). Although our semantic measure alone produces better ranking scores on the TREC set, this does not apply to user queries in PubMed. It is because user queries are often short, including around three words on average, and the semantic measure cannot differentiate documents when they include all query words.",
"Table TABREF19 shows normalized discounted cumulative gain (NDCG) scores for top 5, 10 and 20 ranked documents for each approach. NDCG BIBREF45 is a measure for ranking quality and it penalizes relevant documents appearing in lower ranks by adding a rank-based discount factor. In the table, reranking documents by learning to rank performs better than BM25 overall, however the larger gain is obtained from using titles (BM25 + SEMTitle) by increasing NDCG@20 by 23%. NDCG@5 and NDCG@10 also perform better than BM25 by 23% and 25%, respectively. It is not surprising that SEMTitle produces better performance than SEMAbstract. The current PubMed search interface does not allow users to see abstracts on the results page, hence users click documents mostly based on titles. Nevertheless, it is clear that the abstract-based semantic distance helps achieve better performance.",
"After our experiments for Table TABREF19 , we also assessed the efficiency of learning to rank (BM25 + SEMTitle) by measuring query processing speed in PubMed relevance search. Using 100 computing threads, 900 queries are processed per second, and for each query, the average processing time is 100 milliseconds, which is fast enough to be used in the production system."
],
[
"We presented a word embedding approach for measuring similarity between a query and a document. Starting from the Word Mover's Distance, we reinterpreted the model for a query-document search problem. Even with the INLINEFORM0 flow only, the word embedding approach is already efficient and effective. In this setup, the proposed approach cannot distinguish documents when they include all query words, but surprisingly, the word embedding approach shows remarkable performance on the TREC Genomics datasets. Moreover, applied to PubMed user queries and click-through data, our semantic measure allows to further improves BM25 ranking performance. This demonstrates that the semantic measure is an important feature for IR and is closely related to user clicks.",
"While many deep learning solutions have been proposed recently, their slow training and lack of flexibility to adopt various features limit real-world use. However, our approach is more straightforward and can be easily added as a feature in the current PubMed relevance search framework. Proven by our PubMed search results, our semantic measure improves ranking performance without adding much overhead to the system."
],
[
"This research was supported by the Intramural Research Program of the NIH, National Library of Medicine."
]
],
"section_name": [
"Introduction",
"Methods",
"Word Mover's Distance",
"Our Query-Document Similarity Measure",
"Learning to Rank",
"Results and Discussion",
"Datasets",
"Word Embeddings and Other Experimental Setup",
"TREC Experiments",
"PubMed Experiments",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"92b5fcebb1b4123ae47449f31448ea46c3977b71",
"e33dd01edd31c2754ab545dcb11f80b24e20b979"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"As shown in Table TABREF17 , BM25 performs better than TFIDF and CENTROID. CENTROID maps each query and document to a vector by taking a centroid of word embedding vectors, and the cosine similarity between two vectors is used for scoring and ranking documents. As mentioned earlier, this approach is not effective when multiple topics exist in a document. From the table, the embedding approach boosts the average precision of BM25 by 19% and 6% on TREC 2006 and 2007, respectively. However, CENTROID provides scores lower than BM25 and SEM approaches."
],
"extractive_spans": [
"embedding approach boosts the average precision of BM25 by 19% and 6% on TREC 2006 and 2007"
],
"free_form_answer": "",
"highlighted_evidence": [
"From the table, the embedding approach boosts the average precision of BM25 by 19% and 6% on TREC 2006 and 2007, respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a4c57100bcbe8974087504541acb5cf74532b04d",
"fdedb72a85f1df34719bc3f5316ecd7837e4c871"
],
"answer": [
{
"evidence": [
"First, following Kusner et al. Kusner2015, documents are represented by normalized bag-of-words (BOW) vectors, i.e. if a word INLINEFORM0 appears INLINEFORM1 times in a document, the weight is DISPLAYFORM0",
"where INLINEFORM0 is number of words in the document. The higher the weight, the more important the word. They assume a word embedding so that each word INLINEFORM1 has an associated vector INLINEFORM2 . The dissimilarity INLINEFORM3 between INLINEFORM4 and INLINEFORM5 is then calculated by DISPLAYFORM0"
],
"extractive_spans": [
"documents are represented by normalized bag-of-words (BOW) vectors"
],
"free_form_answer": "",
"highlighted_evidence": [
"First, following Kusner et al. Kusner2015, documents are represented by normalized bag-of-words (BOW) vectors, i.e. if a word INLINEFORM0 appears INLINEFORM1 times in a document, the weight is DISPLAYFORM0\n\nwhere INLINEFORM0 is number of words in the document. The higher the weight, the more important the word. They assume a word embedding so that each word INLINEFORM1 has an associated vector INLINEFORM2 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"First, following Kusner et al. Kusner2015, documents are represented by normalized bag-of-words (BOW) vectors, i.e. if a word INLINEFORM0 appears INLINEFORM1 times in a document, the weight is DISPLAYFORM0"
],
"extractive_spans": [],
"free_form_answer": "normalized bag-of-words vectors",
"highlighted_evidence": [
"First, following Kusner et al. Kusner2015, documents are represented by normalized bag-of-words (BOW) vectors, i.e. if a word INLINEFORM0 appears INLINEFORM1 times in a document, the weight is DISPLAYFORM0"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0768ad749992ae1cdc0329f42a7fc6ea484e570f",
"f07aca9d391327eea56a50e7f996f7fb802e923e"
],
"answer": [
{
"evidence": [
"In our study, we use learning to rank to merge two distinctive features, BM25 scores and our semantic measures. This approach is trained and evaluated on real-world PubMed user queries and their responses based on click-through data BIBREF31 . While it is not common to use only two features for learning to rank, this approach is scalable and versatile. Adding more features subsequently should be straightforward and easy to implement. The performance result we obtain demonstrates the semantic measure is useful to rank documents according to users' interests."
],
"extractive_spans": [],
"free_form_answer": "They merge features of BM25 and semantic measures.",
"highlighted_evidence": [
"In our study, we use learning to rank to merge two distinctive features, BM25 scores and our semantic measures. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For the PubMed dataset, we used learning to rank to combine BM25 and our semantic measure. An advantage of using learning to rank is its flexibility to add more features and optimize performance by learning their importance. PubMed documents are semi-structured, consisting of title, abstract and many more fields. Since our interest lies in text, we only used titles and abstracts, and applied learning to rank in two different ways: 1) to find semantically closest words in titles (BM25 + SEMTitle) and 2) to find semantically closest words in abstracts (BM25 + SEMAbstract). Although our semantic measure alone produces better ranking scores on the TREC set, this does not apply to user queries in PubMed. It is because user queries are often short, including around three words on average, and the semantic measure cannot differentiate documents when they include all query words."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"For the PubMed dataset, we used learning to rank to combine BM25 and our semantic measure."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"67a268c64a242a2810f0bc3fc162091009021ca9",
"8e93a908ca00700caa1a8c37fbcf10282f5c0685"
],
"answer": [
{
"evidence": [
"We used the skip-gram model of word2vec BIBREF16 to obtain word embeddings. The alternative models such as GloVe BIBREF11 and FastText BIBREF37 are available, but their performance varies depending on tasks and is comparable to word2vec overall BIBREF38 , BIBREF39 . word2vec was trained on titles and abstracts from over 25 million PubMed documents. Word vector size and window size were set to 100 and 10, respectively. These parameters were optimized to produce high recall for synonyms BIBREF40 . Note that an independent set (i.e. synonyms) was used for tuning word2vec parameters, and the trained model is available online (https://www.ncbi.nlm.nih.gov/IRET/DATASET)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We used the skip-gram model of word2vec BIBREF16 to obtain word embeddings.",
" word2vec was trained on titles and abstracts from over 25 million PubMed documents."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"We used the skip-gram model of word2vec BIBREF16 to obtain word embeddings. The alternative models such as GloVe BIBREF11 and FastText BIBREF37 are available, but their performance varies depending on tasks and is comparable to word2vec overall BIBREF38 , BIBREF39 . word2vec was trained on titles and abstracts from over 25 million PubMed documents. Word vector size and window size were set to 100 and 10, respectively. These parameters were optimized to produce high recall for synonyms BIBREF40 . Note that an independent set (i.e. synonyms) was used for tuning word2vec parameters, and the trained model is available online (https://www.ncbi.nlm.nih.gov/IRET/DATASET)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We used the skip-gram model of word2vec BIBREF16 to obtain word embeddings. The alternative models such as GloVe BIBREF11 and FastText BIBREF37 are available, but their performance varies depending on tasks and is comparable to word2vec overall BIBREF38 , BIBREF39 . word2vec was trained on titles and abstracts from over 25 million PubMed documents"
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"By how much does their similarity measure outperform BM25?",
"How do they represent documents when using their proposed similarity measure?",
"How do they propose to combine BM25 and word embedding similarity?",
"Do they use pretrained word embeddings to calculate Word Mover's distance?"
],
"question_id": [
"2973fe3f5b4bf70ada02ac4a9087dd156cc3016e",
"42269ed04e986ec5dc4164bf57ef306aec4a1ae1",
"31a3ec8d550054465e55a26b0136f4d50d72d354",
"a7e1b13cc42bfe78d37b9c943de6288e5f00f01b"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Number of documents in the TREC and PubMed datasets.",
"Table 2: Average precision of BM25 and our embedding approach (EMBED) on the TREC set.",
"Table 3: Average precision of BM25 and our embedding approach (EMBED) on the PubMed set.",
"Table 4: Average precision of BM25 and our approach (EMBED) on the TREC set. All documents in the set are scored.",
"Figure 1: TREC performance changes depending on α."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"5-Figure1-1.png"
]
} | [
"How do they propose to combine BM25 and word embedding similarity?"
] | [
[
"1608.01972-PubMed Experiments-0",
"1608.01972-Learning to Rank-0"
]
] | [
"They merge features of BM25 and semantic measures."
] | 267 |
1705.01306 | Amobee at SemEval-2017 Task 4: Deep Learning System for Sentiment Detection on Twitter | This paper describes the Amobee sentiment analysis system, adapted to compete in SemEval 2017 task 4. The system consists of two parts: a supervised training of RNN models based on a Twitter sentiment treebank, and the use of feedforward NN, Naive Bayes and logistic regression classifiers to produce predictions for the different sub-tasks. The algorithm reached the 3rd place on the 5-label classification task (sub-task C). | {
"paragraphs": [
[
"Sentiment detection is the process of determining whether a text has a positive or negative attitude toward a given entity (topic) or in general. Detecting sentiment on Twitter—a social network where users interact via short 140-character messages, exchanging information and opinions—is becoming ubiquitous. Sentiment in Twitter messages (tweets) can capture the popularity level of political figures, ideas, brands, products and people. Tweets and other social media texts are challenging to analyze as they are inherently different; use of slang, mis-spelling, sarcasm, emojis and co-mentioning of other messages pose unique difficulties. Combined with the vast amount of Twitter data (mostly public), these make sentiment detection on Twitter a focal point for data science research.",
"SemEval is a yearly event in which teams compete in natural language processing tasks. Task 4 is concerned with sentiment analysis in Twitter; it contains five sub-tasks which include classification of tweets according to 2, 3 or 5 labels and quantification of sentiment distribution regarding topics mentioned in tweets; for a complete description of task 4 see BIBREF0 .",
"This paper describes our system and participation in all sub-tasks of SemEval 2017 task 4. Our system consists of two parts: a recurrent neural network trained on a private Twitter dataset, followed by a task-specific combination of model stacking and logistic regression classifiers.",
"The paper is organized as follows: section SECREF2 describes the training of RNN models, data being used and model selection; section SECREF3 describes the extraction of semantic features; section SECREF4 describes the task-specific workflows and scores. We review and summarize in section SECREF5 . Finally, section SECREF6 describes our future plans, mainly the development of an LSTM algorithm."
],
[
"The first part of our system consisted of training recursive-neural-tensor-network (RNTN) models BIBREF1 ."
],
[
"Our training data for this part was created by taking a random sample from Twitter and having it manually annotated on a 5-label basis to produce fully sentiment-labeled parse-trees, much like the Stanford sentiment treebank. The sample contains twenty thousand tweets with sentiment distribution as following:",
"",
""
],
[
"First we build a custom dictionary by means of crawling Wikipedia and extracting lists of brands, celebrities, places and names. The lists were then pruned manually. Then we define the following steps when preprocessing tweets:",
"Standard tokenization of the sentences, using the Stanford coreNLP tools BIBREF2 .",
"Word-replacement step using the Wiki dictionary with representative keywords.",
"Lemmatization, using coreNLP.",
"Emojis: removing duplicate emojis, clustering them according to sentiment and replacing them with representative keywords, e.g. “happy-emoji”.",
"Regex: removing duplicate punctuation marks, replacing URLs with a keyword, removing Camel casing.",
"Parsing: parts-of-speech and constituency parsing using a shift-reduce parser, which was selected for its speed over accuracy.",
"NER: using entity recognition annotator, replacing numbers, dates and locations with representative keywords.",
"Wiki: second step of word-replacement using our custom wiki dictionary."
],
[
"We used the Stanford coreNLP sentiment annotator, introduced by BIBREF1 . Words are initialized either randomly as INLINEFORM0 dimensional vectors, or given externally as word vectors. We used four versions of the training data; with and without lemmatization and with and without pre-trained word representations BIBREF3 ."
],
[
"Twitter messages can be comprised of several sentences, with different and sometimes contrary sentiments. However, the trained models predict sentiment on individual sentences. We aggregated the sentiment for each tweet by taking a linear combination of the individual sentences comprising the tweet with weights having the following power dependency: DISPLAYFORM0 ",
" where INLINEFORM0 are numerical factors to be found, INLINEFORM1 are the fraction of known words, length of the sentence and polarity, respectively, with polarity defined by: DISPLAYFORM0 ",
" where vn, n, p, vp are the probabilities as assigned by the RNTN for very-negative, negative, positive and very-positive label for each sentence. We then optimized the parameters INLINEFORM0 with respect to the true labels."
],
[
"After training dozens of models, we chose to combine only the best ones using stacking, namely combining the models output using a supervised learning algorithm. For this purpose, we used the Scikit-learn BIBREF4 recursive feature elimination (RFE) algorithm to find both the optimal number and the actual models, thus choosing the best five models. The models chosen include a representative from each type of the data we used and they were:",
"Training data without lemmatization step, with randomly initialized word-vectors of size 27.",
"Training data with lemmatization step, with pre-trained word-vectors of size 25.",
"3 sets of training data with lemmatization step, with randomly initialized word-vectors of sizes 24, 26.",
"The five models output is concatenated and used as input for the various tasks, as described in SECREF27 ."
],
[
"In addition to the RNN trained models, our system includes feature extraction step; we defined a set of lexical and semantical features to be extracted from the original tweets:",
"For this purpose, we used the Stanford deterministic coreference resolution system BIBREF5 , BIBREF6 ."
],
[
"The experiments were developed by using Scikit-learn machine learning library and Keras deep learning library with TensorFlow backend BIBREF7 . Results for all sub-tasks are summarized in table",
" TABREF26 ."
],
[
"For each tweet, we first ran the RNN models and got a 5-category probability distribution from each of the trained models, thus a 25-dimensional vector. Then we extracted sentence features and concatenated them with the RNN vector. We then trained a Feedforward NN which outputs a 5-label probability distribution for each tweet. That was the starting point for each of the tasks; we refer to this process as the pipeline."
],
[
"The goal of this task is to classify tweets sentiment into three classes (negative, neutral, positive) where the measured metric is a macro-averaged recall.",
"We used the SemEval 2017 task A data in the following way: using SemEval 2016 TEST as our TEST, partitioning the rest into TRAIN and DEV datasets. The test dataset went through the previously mentioned pipeline, getting a 5-label probability distribution.",
"We anticipated the sentiment distribution of the test data would be similar to the training data—as they may be drawn from the same distribution. Therefore we used re-sampling of the training dataset to obtain a skewed dataset such that a logistic regression would predict similar sentiment distributions for both the train and test datasets. Finally we trained a logistic regression on the new dataset and used it on the task A test set. We obtained a macro-averaged recall score of INLINEFORM0 and accuracy of INLINEFORM1 .",
"Apparently, our assumption about distribution similarity was misguided as one can observe in the next table.",
"",
""
],
[
"The goals of these tasks are to classify tweets sentiment regarding a given entity as either positive or negative (task B) and estimate sentiment distribution for each entity (task D). The measured metrics are macro-averaged recall and KLD, respectively.",
"We started with the training data passing our pipeline. We calculated the mean distribution for each entity on the training and testing datasets. We trained a logistic regression from a 5-label to a binary distribution and predicted a positive probability for each entity in the test set. This was used as a prior distribution for each entity, modeled as a Beta distribution. We then trained a logistic regression where the input is a concatenation of the 5-labels with the positive component of the probability distribution of the entity's sentiment and the output is a binary prediction for each tweet. Then we chose the label—using the mean positive probability as a threshold. These predictions are submitted as task B. We obtained a macro-averaged recall score of INLINEFORM0 and accuracy of INLINEFORM1 .",
"Next, we took the predictions mean for each entity as the likelihood, modeled as a Binomial distribution, thus getting a Beta posterior distribution for each entity. These were submitted as task D. We obtained a score of INLINEFORM0 ."
],
[
"The goals of these tasks are to classify tweets sentiment regarding a given entity into five classes—very negative, negative, neutral, positive, very positive—(task C) and estimate sentiment distribution over five classes for each entity (task E). The measured metrics are macro-averaged MAE and earth-movers-distance (EMD), respectively.",
"We first calculated the mean sentiment for each entity. We then used bootstrapping to generate a sample for each entity. Then we trained a logistic regression model which predicts a 5-label distribution for each entity. We modified the initial 5-label probability distribution for each tweet using the following formula: DISPLAYFORM0 ",
" where INLINEFORM0 are the current tweet and label, INLINEFORM1 is the sentiment prediction of the logistic regression model for an entity, INLINEFORM2 is the set of all tweets and INLINEFORM3 is the set of labels. We trained a logistic regression on the new distribution and the predictions were submitted as task C. We obtained a macro-averaged MAE score of INLINEFORM4 .",
"Next, we defined a loss function as follows: DISPLAYFORM0 ",
" where the probabilities are the predicted probabilities after the previous logistic regression step. Finally we predicted a label for each tweet according to the lowest loss, and calculated the mean sentiment for each entity. These were submitted as task E. We obtained a score of INLINEFORM0 ."
],
[
"In this paper we described our system of sentiment analysis adapted to participate in SemEval task 4. The highest ranking we reached was third place on the 5-label classification task. Compared with classification with 2 and 3 labels, in which we scored lower, and the fact we used similar workflow for tasks A, B, C, we speculate that the relative success is due to our sentiment treebank ranking on a 5-label basis. This can also explain the relatively superior results in quantification of 5 categories as opposed to quantification of 2 categories.",
"Overall, we have had some unique advantages and disadvantages in this competition. On the one hand, we enjoyed an additional twenty thousand tweets, where every node of the parse tree was labeled for its sentiment, and also had the manpower to manually prune our dictionaries, as well as the opportunity to get feedback from our clients. On the other hand, we did not use any user information and/or metadata from Twitter, nor did we use the SemEval data for training the RNTN models. In addition, we did not ensemble our models with any commercially or freely available pre-trained sentiment analysis packages."
],
[
"We have several plans to improve our algorithm and to use new data. First, we plan to extract more semantic features such as verb and adverb classes and use them in neural network models as additional input. Verb classification was used to improve sentiment detection BIBREF8 ; we plan to label verbs according to whether their sentiment changes as we change the tense, form and active/passive voice. Adverbs were also used to determine sentiment BIBREF9 ; we plan to classify adverbs into sentiment families such as intensifiers (“very”), diminishers (“slightly”), positive (“delightfully”) and negative (“shamefully”).",
"Secondly, we can use additional data from Twitter regarding either the users or the entities-of-interest.",
"Finally, we plan to implement a long short-term memory (LSTM) network BIBREF10 which trains on a sentence together with all the syntax and semantic features extracted from it. There is some work in the field of semantic modeling using LSTM, e.g. BIBREF11 , BIBREF12 . Our plan is to use an LSTM module to extend the RNTN model of BIBREF1 by adding the additional semantic data of each phrase and a reference to the entity-of-interest. An illustration of the computational graph for the proposed model is presented in figure FIGREF33 . The inputs/outputs are: INLINEFORM0 is a word vector representation of dimension INLINEFORM1 , INLINEFORM2 encodes the parts-of-speech (POS) tagging, syntactic category and an additional bit indicating whether the entity-of-interest is present in the expression—all encoded in a 7 dimensional vector, INLINEFORM3 is a control channel of dimension INLINEFORM4 , INLINEFORM5 is an output layer of dimension INLINEFORM6 and INLINEFORM7 is a sentiment vector of dimension INLINEFORM8 .",
"The module functions are defined as following: DISPLAYFORM0 ",
" where INLINEFORM0 is a matrix to be learnt, INLINEFORM1 denotes Hadamard (element-wise) product and INLINEFORM2 denotes concatenation. The functions INLINEFORM3 are the six NN computations, given by: DISPLAYFORM0 ",
" where INLINEFORM0 are the INLINEFORM1 dimensional word embedding, 6-bit encoding of the syntactic category and an indication bit of the entity-of-interest for the INLINEFORM2 th phrase, respectively, INLINEFORM3 encodes the inputs of a left descendant INLINEFORM4 and a right descendant INLINEFORM5 in a parse tree and INLINEFORM6 . Define INLINEFORM7 , then INLINEFORM8 is a tensor defining bilinear forms, INLINEFORM9 with INLINEFORM10 are indication functions for having the entity-of-interest on the left and/or right child and INLINEFORM11 are matrices to be learnt.",
"The algorithm processes each tweet according to its parse tree, starting at the leaves and going up combining words into expressions; this is different than other LSTM algorithms since the parsing data is used explicitly. As an example, figure FIGREF36 presents the simple sentence “Amobee is awesome” with its parsing tree. The leaves are given by INLINEFORM0 -dimensional word vectors together with their POS tagging, syntactic categories (if defined for the leaf) and an entity indicator bit. The computation takes place in the inner nodes; “is” and “awesome” are combined in a node marked by “VP” which is the phrase category. In terms of our terminology, “is” and “awesome” are the INLINEFORM1 nodes, respectively for “VP” node calculation. We define INLINEFORM2 as the cell's state for the left child, in this case the “is” node. Left and right are concatenated as input INLINEFORM3 and the metadata INLINEFORM4 is from the right child while INLINEFORM5 is the metadata from the left child. The second calculation takes place at the root “S”; the input INLINEFORM6 is now a concatenation of “Amobee” word vector, the input INLINEFORM7 holds the INLINEFORM8 output of the previous step in node “VP”; the cell state INLINEFORM9 comes from the “Amobee” node."
]
],
"section_name": [
"Introduction",
"RNN Models",
"Data",
"Preprocessing",
"Training",
"Tweet Aggregation",
"Model Selection",
"Features Extraction",
"Experiments",
"General Workflow",
"Task A",
"Tasks B, D",
"Tasks C, E",
"Review and Conclusions",
"Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"1141d032fd5732ff6533681606dd65dad5b4a687",
"7312bbe11d56848206ee5b40ac6c55034ab19d9c"
],
"answer": [
{
"evidence": [
"Our training data for this part was created by taking a random sample from Twitter and having it manually annotated on a 5-label basis to produce fully sentiment-labeled parse-trees, much like the Stanford sentiment treebank. The sample contains twenty thousand tweets with sentiment distribution as following:"
],
"extractive_spans": [],
"free_form_answer": "They built their own",
"highlighted_evidence": [
"Our training data for this part was created by taking a random sample from Twitter and having it manually annotated on a 5-label basis to produce fully sentiment-labeled parse-trees, much like the Stanford sentiment treebank."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our training data for this part was created by taking a random sample from Twitter and having it manually annotated on a 5-label basis to produce fully sentiment-labeled parse-trees, much like the Stanford sentiment treebank. The sample contains twenty thousand tweets with sentiment distribution as following:"
],
"extractive_spans": [
"Our training data for this part was created by taking a random sample from Twitter and having it manually annotated on a 5-label basis to produce fully sentiment-labeled parse-trees"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our training data for this part was created by taking a random sample from Twitter and having it manually annotated on a 5-label basis to produce fully sentiment-labeled parse-trees, much like the Stanford sentiment treebank."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2a10c6a6425c65f83c731097fa6512933ab5f22a",
"837d9de554d0bbdd3f04e3d3b23f2d05a056439b"
],
"answer": [
{
"evidence": [
"In this paper we described our system of sentiment analysis adapted to participate in SemEval task 4. The highest ranking we reached was third place on the 5-label classification task. Compared with classification with 2 and 3 labels, in which we scored lower, and the fact we used similar workflow for tasks A, B, C, we speculate that the relative success is due to our sentiment treebank ranking on a 5-label basis. This can also explain the relatively superior results in quantification of 5 categories as opposed to quantification of 2 categories."
],
"extractive_spans": [
"which we scored lower"
],
"free_form_answer": "",
"highlighted_evidence": [
"Compared with classification with 2 and 3 labels, in which we scored lower, and the fact we used similar workflow for tasks A, B, C, we speculate that the relative success is due to our sentiment treebank ranking on a 5-label basis."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"07b1ee4ffed702f154a8faedea888e96a6422dd4",
"e7cd9238e169be4eafd24fe2ee1e8de9d5918409"
],
"answer": [
{
"evidence": [
"The goals of these tasks are to classify tweets sentiment regarding a given entity into five classes—very negative, negative, neutral, positive, very positive—(task C) and estimate sentiment distribution over five classes for each entity (task E). The measured metrics are macro-averaged MAE and earth-movers-distance (EMD), respectively."
],
"extractive_spans": [
"very negative, negative, neutral, positive, very positive"
],
"free_form_answer": "",
"highlighted_evidence": [
"The goals of these tasks are to classify tweets sentiment regarding a given entity into five classes—very negative, negative, neutral, positive, very positive—(task C) and estimate sentiment distribution over five classes for each entity (task E)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The goals of these tasks are to classify tweets sentiment regarding a given entity into five classes—very negative, negative, neutral, positive, very positive—(task C) and estimate sentiment distribution over five classes for each entity (task E). The measured metrics are macro-averaged MAE and earth-movers-distance (EMD), respectively."
],
"extractive_spans": [
"very negative",
"negative",
"neutral",
"positive",
"very positive"
],
"free_form_answer": "",
"highlighted_evidence": [
"The goals of these tasks are to classify tweets sentiment regarding a given entity into five classes—very negative, negative, neutral, positive, very positive—(task C) and estimate sentiment distribution over five classes for each entity (task E). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Which Twitter sentiment treebank is used?",
"Where did the system place in the other sub-tasks?",
"What were the five labels to be predicted in sub-task C?"
],
"question_id": [
"49cd18448101da146c3187a44412628f8c722d7b",
"e9260f6419c35cbd74143f658dbde887ef263886",
"2834a340116026d5995e537d474a47d6a74c3745"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Summary of evaluation results, metrics used and rank achieved, for all sub tasks. ρ is macro-averaged recall, MAEM is macro-averaged mean absolute error, KLD is Kullback-Leibler divergence and EMD is earth-movers distance.",
"Figure 1: LSTM module; round purple nodes are element-wise operations, turquoise rectangles are neural network layers, orange rhombus is a dim-reducing matrix, splitting line is duplication, merging lines is concatenation.",
"Figure 2: Constituency-based parse tree; the LSTM module runs on the internal nodes by concatenating the left and right nodes as its input."
],
"file": [
"4-Table1-1.png",
"5-Figure1-1.png",
"6-Figure2-1.png"
]
} | [
"Which Twitter sentiment treebank is used?"
] | [
[
"1705.01306-Data-0"
]
] | [
"They built their own"
] | 268 |
2003.13028 | Abstractive Summarization with Combination of Pre-trained Sequence-to-Sequence and Saliency Models | Pre-trained sequence-to-sequence (seq-to-seq) models have significantly improved the accuracy of several language generation tasks, including abstractive summarization. Although the fluency of abstractive summarization has been greatly improved by fine-tuning these models, it is not clear whether they can also identify the important parts of the source text to be included in the summary. In this study, we investigated the effectiveness of combining saliency models that identify the important parts of the source text with the pre-trained seq-to-seq models through extensive experiments. We also proposed a new combination model consisting of a saliency model that extracts a token sequence from a source text and a seq-to-seq model that takes the sequence as an additional input text. Experimental results showed that most of the combination models outperformed a simple fine-tuned seq-to-seq model on both the CNN/DM and XSum datasets even if the seq-to-seq model is pre-trained on large-scale corpora. Moreover, for the CNN/DM dataset, the proposed combination model exceeded the previous best-performed model by 1.33 points on ROUGE-L. | {
"paragraphs": [
[
"Pre-trained language models such as BERT BIBREF0 have significantly improved the accuracy of various language processing tasks. However, we cannot apply BERT to language generation tasks as is because its model structure is not suitable for language generation. Several pre-trained seq-to-seq models for language generation BIBREF1, BIBREF2 based on an encoder-decoder Transformer model, which is a standard model for language generation, have recently been proposed. These models have achieved blackstate-of-the-art results in various language generation tasks, including abstractive summarization.",
"However, when generating a summary, it is essential to correctly predict which part of the source text should be included in the summary. Some previous studies without pre-training have examined combining extractive summarization with abstractive summarization BIBREF3, BIBREF4. Although pre-trained seq-to-seq models have achieved higher accuracy compared to previous models, it is not clear whether modeling “Which part of the source text is important?” can be learned through pre-training.",
"blackThe purpose of this study is to clarify the blackeffectiveness of combining saliency models that identify the important part of the source text with a pre-trained seq-to-seq model in the abstractive summarization task. Our main contributions are as follows:",
"We investigated nine combinations of pre-trained seq-to-seq and token-level saliency models, where the saliency models share the parameters with the encoder of the seq-to-seq model or extract important tokens independently of the encoder.",
"We proposed a new combination model, the conditional summarization model with important tokens (CIT), in which a token sequence extracted by a saliency model is explicitly given to a seq-to-seq model as an additional input text.",
"We evaluated the combination models on the CNN/DM BIBREF5 and XSum BIBREF6 datasets. Our CIT model outperformed a simple fine-tuned model in terms of ROUGE scores on both datasets."
],
[
"Our study focuses on two tasks: abstractive summarization and blacksaliency detection. The main task is abstractive summarization and the sub task is blacksaliency detection, which is the prediction of important parts of the source text. The problem formulations of each task are described below.",
"Task 1 (Abstractive summarization) Given the source text $X$, the output is an abstractive summary $Y$ = $(y_1,\\ldots ,y_T)$.",
"Task 2 (Saliency detection) Given the source text $X$ with $L$ words $X$= $(x_1,\\dots ,x_L)$, the output is the saliency score $S = \\lbrace S_1, S_2, ... S_L \\rbrace $.",
"In this study, we investigate several combinations of models for these two tasks."
],
[
"There are several pre-trained seq-to-seq models applied for abstractive summarization BIBREF7, BIBREF8, BIBREF2. The models use a simple Transformer-based encoder-decoder model BIBREF9 in which the encoder-decoder model is pre-trained on large unlabeled data."
],
[
"In this work, we define the Transformer-based encoder-decoder model as follows."
],
[
"The encoder consists of $M$ layer encoder blocks. The input of the encoder is $X = \\lbrace x_i, x_2, ... x_L \\rbrace $. The output through the $M$ layer encoder blocks is defined as",
"The encoder block consists of a self-attention module and a two-layer feed-forward network."
],
[
"The decoder consists of $M$ layer decoder blocks. The inputs of the decoder are the output of the encoder $H_e^M$ and the output of the previous step of the decoder $\\lbrace y_1,...,y_{t-1} \\rbrace $. The output through the $M$ layer Transformer decoder blocks is defined as",
"In each step $t$, the $h_{dt}^M$ is projected to blackthe vocabulary space and the decoder outputs the highest probability token as the next token. The Transformer decoder block consists of a self-attention module, a context-attention module, and a two-layer feed-forward network."
],
[
"The encoder and decoder blocks use multi-head attention, which consists of a combination of $K$ attention heads and is denoted as $\\mathrm {Multihead}(Q, K, V) = \\mathrm {Concat}(\\mathrm {head}_1, ...,\\mathrm {head}_k)W^o$, where each head is $\\mathrm {head}_i = \\mathrm {Attention}(QW_i^Q, KW_i^K , VW_i^V)$.",
"The weight matrix $A$ in each attention-head $\\mathrm {Attention}(\\tilde{Q}, \\tilde{K}, \\tilde{V}) = A \\tilde{V}$ is defined as",
"where $d_k = d / k$, $\\tilde{Q} \\in \\mathbb {R}^{I \\times d}$, $\\tilde{K}, \\tilde{V} \\in \\mathbb {R}^{J \\times d}$.",
"In the $m$-th layer of self-attention, the same representation $H^m_{\\cdot }$ is given to $Q$, $K$, and $V$. In the context-attention, we give $H^m_d$ to $Q$ and $H^M_e$ to $K$ and $V$."
],
[
"To fine-tune the seq-to-seq model for abstractive summarization, we use cross entropy loss as",
"where $N$ is the number of training samples."
],
[
"Several studies have proposed the combination of a token-level saliency model and a seq-to-seq model, blackwhich is not pre-trained, and reported its effectiveness BIBREF3, BIBREF10. We also use a simple token-level saliency model blackas a basic model in this study."
],
[
"A basic saliency model consists of $M$-layer Transformer encoder blocks ($\\mathrm {Encoder}_\\mathrm {sal}$) and a single-layer feed-forward network. We define the saliency score of the $l$-th token ($1 \\le l \\le L$) in the source text as",
"where ${\\rm Encoder_{sal}()}$ represents the output of the last layer of black$\\rm Encoder_{sal}$, $W_1 \\in \\mathbb {R}^{d}$ and $b_1$ are learnable parameters, and $\\sigma $ represents a sigmoid function."
],
[
"In this study, we use two types of saliency model for combination: a shared encoder and an extractor. Each model structure is based on the basic saliency model. We describe them below."
],
[
"The shared encoder blackshares the parameters of black$\\rm Encoder_{sal}$ and the encoder of the seq-to-seq model. This model is jointly trained with the seq-to-seq model and the saliency score is used to bias the representations of the seq-to-seq model."
],
[
"The extractor extracts the important tokens or sentences from the source text on the basis of the saliency score. The extractor is separated with the seq-to-seq model, and each model is trained independently."
],
[
"The saliency model predicts blackthe saliency score $S_l$ for each token $x_l$. If there is a reference label $r_l$ $\\in \\lbrace 0, 1\\rbrace $ for each $x_l$, we can train the saliency model in a supervised manner. However, the reference label for each token is typically not given, since the training data for the summarization consists of only the source text and its reference summary. Although there are no reference saliency labels, we can make pseudo reference labels by aligning both source and summary token sequences and extracting common tokens BIBREF3. blackWe used pseudo labels when we train the saliency model in a supervised manner."
],
[
"To train the saliency model in a supervised way blackwith pseudo reference labels, we use binary cross entropy loss as",
"where $r_l^n$ is a pseudo reference label of token $x_l$ in the $n$-th sample."
],
[
"This section describes nine combinations of the pre-trained seq-to-seq model and saliency models."
],
[
"We roughly categorize the combinations into three types. Figure FIGREF23 shows an image of each combination.",
"The first type uses the shared encoder (§SECREF26). These models consist of the shared encoder and the decoder, where the shared encoder module blackplays two roles: saliency detection and the encoding of the seq-to-seq model. blackThe saliency scores are used to bias the representation of the seq-to-seq model for several models in this type.",
"The second type uses the extractor (§SECREF34, §SECREF37). These models consist of the extractor, encoder, and decoder and follow two steps: first, blackthe extractor blackextracts the important tokens or sentences from the source text, and second, blackthe encoder uses them as an input of the seq-to-seq models. Our proposed model (CIT) belongs to this type.",
"The third type uses both the shared encoder and the extractor (§SECREF39). These models consist of the extractor, shared encoder, and decoder and also follow two steps: first, blackthe extractor extracts the important tokens from the source text, and second, blackthe shared encoder uses them as an input of the seq-to-seq model."
],
[
"From the viewpoint of the loss function, there are two major types of model: those that use the saliency loss (§SECREF21) and those that do not. We also denote the loss function for the seq-to-seq model as $L_\\mathrm {abs}$ and the loss function for the extractor as $L_\\mathrm {ext}$. black$L_\\mathrm {ext}$ is trained with $L_\\mathrm {sal}$, and $L_\\mathrm {abs}$ is trained with $L_\\mathrm {sum}$ or $L_\\mathrm {sum} + L_\\mathrm {sal}$."
],
[
"This model trains the shared encoder and the decoder by minimizing both the summary and saliency losses. The loss function of this model is $L_\\mathrm {abs} = L_\\mathrm {sum} + L_\\mathrm {sal}$."
],
[
"This model uses the saliency score to weight the shared encoder output. Specifically, the final output $h_{el}^M$ of the shared encoder is weighted as",
"Then, we replace the input of the decoder $h_{el}^M$ with $\\tilde{h}_{el}^{M}$. Although BIBREF10 used BiGRU, we use Transformer for fair comparison. The loss function of this model is $L_\\mathrm {abs} = L_\\mathrm {sum}.$"
],
[
"This model has the same structure as the SE. The loss function of this model is $L_\\mathrm {abs} = L_\\mathrm {sum} + L_\\mathrm {sal}$."
],
[
"This model weights the attention scores of the decoder side, unlike the SE model. Specifically, the attention score $a_{i}^{t} \\in \\mathbb {R}^L$ in each step $t$ is weighted by $S_l$. $a_{i}^{t}$ is a $t$-th row of $A_i \\in \\mathbb {R}^{T \\times L}$, which is a weight matrix of the $i$-th attention head in the context-attention (Eq. (DISPLAY_FORM12)).",
"BIBREF3 took a similar approach in that their model weights the copy probability of a pointer-generator model. However, as the pre-trained seq-to-seq model does not have a copy mechanism, we weight the context-attention for all Transformer decoder blocks. The loss function of this model is $L_\\mathrm {abs} = L_\\mathrm {sum}$."
],
[
"This model has the same structure as the SA. The loss function of this model is $L_\\mathrm {abs} = L_\\mathrm {sum} + L_\\mathrm {sal}$."
],
[
"This model first extracts the saliency sentences on the basis of a sentence-level saliency score $S_j$. $S_j$ is calculated by using the token level saliency score of the extractor, $S_l$, as",
"where $N_j$ and $X_j$ are the number of tokens and the set of tokens within the $j$-th sentence. Top $P$ sentences are extracted according to the sentence-level saliency score and then concatenated as one text $X_s$. These extracted sentences are then used as the input of the seq-to-seq model.",
"In the training, we extracted $X_s$, which maximizes the ROUGE-L scores with the reference summary text. In the test, we used the average number of sentences in $X_{s}$ in the training set as $P$. The loss function of the extractor is $L_\\mathrm {ext} = L_\\mathrm {sal}$, and that of the seq-to-seq model is $L_\\mathrm {abs} = L_\\mathrm {sum}$."
],
[
"We propose a new combination of the extractor and the seq-to-seq model, CIT, which can consider important tokens explicitly. Although the SE and SA models softly weight the representations of the source text or attention scores, they cannot select salient tokens explicitly. SEG explicitly extracts the salient sentences from the source text, but it cannot give token-level information to the seq-to-seq model, and it sometimes drops important information when extracting sentences. In contrast, CIT uses the tokens extracted according to saliency scores as an additional input of the seq-to-seq model. By adding token-level information, CIT can effectively guide the abstractive summary without dropping any important information.",
"Specifically, $K$ tokens $C = \\lbrace c_1, ..., c_K \\rbrace $ are extracted in descending order of the saliency score $S$. $S$ is obtained by inputting $X$ to the extractor. The order of $C$ retains the order of the source text $X$. A combined text $\\tilde{X} = \\mathrm {Concat}(C, X)$ is given to the seq-to-seq model as the input text. The loss function of the extractor is $L_\\mathrm {ext} = L_\\mathrm {sal}$, and that of the seq-to-seq model is $L_\\mathrm {abs} = L_\\mathrm {sum}$."
],
[
"This model combines the CIT and SE, so CIT uses an extractor for extracting important tokens, and SE is trained by using a shared encoder in the seq-to-seq model. The SE model is trained in an unsupervised way. The output $H^M_e \\in \\mathbb {R}^{L+K}$ of the shared encoder is weighted by saliency score $S \\in \\mathbb {R}^{L+K}$ with Eq. (DISPLAY_FORM29), where $S$ is estimated by using the output of the shared encoder with Eq. (DISPLAY_FORM16). The loss function of the extractor is $L_\\mathrm {ext} = L_\\mathrm {sal}$, and that of the seq-to-seq model is $L_\\mathrm {abs} = L_\\mathrm {sum}$."
],
[
"This model combines the CIT and SA, so we also train two saliency models. The SA model is trained in an unsupervised way, the same as the CIT + SE model. The attention score $a_i^t \\in \\mathbb {R}^{L+K}$ is weighted by $S \\in \\mathbb {R}^{L+K}$ with Eq. (DISPLAY_FORM32). The loss function of the extractor is $L_\\mathrm {ext} = L_\\mathrm {sal}$, and that of the seq-to-seq model is $L_\\mathrm {abs} = L_\\mathrm {sum}$."
],
[
"We used the CNN/DM dataset BIBREF5 and the XSum dataset BIBREF6, which are both standard datasets for news summarization. The details of the two datasets are listed in Table TABREF48. The CNN/DM is a highly extractive summarization dataset and the XSum is a highly abstractive summarization dataset."
],
[
"blackWe used BART$_{\\mathrm {LARGE}}$ BIBREF1, which is one of the state-of-the-art models, as the pre-trained seq-to-seq model and RoBERTa$_\\mathrm {BASE}$ BIBREF11 as the initial model of the extractor. In the extractor of CIT, stop words and duplicate tokens are ignored for the XSum dataset.",
"We used fairseq for the implementation of the seq-to-seq model. For fine-tuning of BART$_\\mathrm {LARGE}$ and the combination models, we used the same parameters as the official code. For fine-tuning of RoBERTa$_\\mathrm {BASE}$, we used Transformers. We set the learning rate to 0.00005 and the batch size to 32."
],
[
"We used ROUGE scores (F1), including ROUGE-1 (R-1), ROUGE-2 (R-2), and ROUGE-L (R-L), as the evaluation metrics BIBREF12. ROUGE scores were calculated using the files2rouge toolkit."
],
[
"Rouge scores of the combined models on the CNN/DM dataset are shown in Table TABREF51. We can see that all combined models outperformed the simple fine-tuned BART. This indicates that the saliency detection is effective in highly extractive datasets. One of the proposed models, CIT + SE, achieved the highest accuracy. The CIT model alone also outperformed other saliency models. This indicates that the CIT model effectively guides the abstractive summarization by combining explicitly extracted tokens."
],
[
"Rouge scores of the combined models on the XSum dataset are shown in Table TABREF52. The CIT model performed the best, although its improvement was smaller than on the CNN/DM dataset. Moreover, the accuracy of the MT, SE + MT, and SEG models decreased on the XSum dataset. These results were very different from those on the CNN/DM dataset.",
"One reason for the difference can be traced to the quality of the pseudo saliency labels. CNN/DM is a highly extractive dataset, so it is relatively easy to create token alignments for generating pseudo saliency labels, while in contrast, a summary in XSum is highly abstractive and short, which makes it difficult to create pseudo labels with high quality by simple token alignment. To improve the accuracy of summarization in this dataset, we have to improve the quality of the pseudo saliency labels and the accuracy of the saliency model."
],
[
"We analyzed the quality of the tokens extracted by the extractor in CIT. The results are summarized in Table TABREF55. On the CNN/DM dataset, the ROUGE-1 and ROUGE-2 scores of our extractor (Top-$K$ tokens) were higher than other models, while the ROUGE-L score was lower than the other sentence-based extraction method. This is because that our token-level extractor finds the important tokens whereas the seq-to-seq model learns how to generate a fluent summary incorporating these important tokens.",
"On the other hand, the extractive result on the XSum dataset was lower. For highly abstractive datasets, there is little overlap between the tokens. We need to consider how to make the high-quality pseudo saliency labels and how to evaluate the similarity of these two sequences."
],
[
"Our study focuses on the combinations of saliency models and the pre-trained seq-to-seq model. However, there are several studies that focus more on the pre-training strategy. We compared the CIT model with those models. Their ROUGE scores are shown in Tables TABREF57 and TABREF58. From Table TABREF57, we can see that our model outperformed the recent pre-trained models on the CNN/DM dataset. Even though PEGASUS$_\\mathrm {HugeNews}$ was pre-trained on the largest corpus comprised of news-like articles, the accuracy of abstractive summarization was not improved much. Our model improved the accuracy without any additional pre-training. This result indicates that it is more effective to combine saliency models with the seq-to-seq model for generating a highly extractive summary.",
"On the other hand, on the XSum dataset, PEGASUS$_\\mathrm {HugeNews}$ improved the ROUGE scores and achieved the best results. In the XSum dataset, summaries often include the expressions that are not written in the source text. Therefore, increasing the pre-training data and learning more patterns were effective. However, by improving the quality of the pseudo saliency labels, we should be able to improve the accuracy of the CIT model."
],
[
"BIBREF18 used BERT for their sentence-level extractive summarization model. BIBREF19 proposed a new pre-trained model that considers document-level information for sentence-level extractive summarization. Several researchers have published pre-trained encoder-decoder models very recently BIBREF20, BIBREF1, BIBREF2. BIBREF20 pre-trained a Transformer-based pointer-generator model. BIBREF1 pre-trained a standard Transformer-based encoder-decoder model using large unlabeled data and achieved state-of-the-art results. BIBREF8 and BIBREF16 extended the BERT structure to handle seq-to-seq tasks.",
"All the studies above focused on how to learn a universal pre-trained model; they did not consider the combination of pre-trained and saliency models for an abstractive summarization model."
],
[
"BIBREF4, BIBREF3, and BIBREF21 incorporated a sentence- and word-level extractive model in the pointer-generator model. Their models weight the copy probability for the source text by using an extractive model and guide the pointer-generator model to copy important words. BIBREF22 proposed a keyword guided abstractive summarization model. BIBREF23 proposed a sentence extraction and re-writing model that is trained in an end-to-end manner by using reinforcement learning. BIBREF24 proposed a search and rewrite model. BIBREF25 proposed a combination of sentence-level extraction and compression. None of these models are based on a pre-trained model. In contrast, our purpose is to clarify whether combined models are effective or not, and we are the first to investigate the combination of pre-trained seq-to-seq and saliency models. We compared a variety of combinations and clarified which combination is the most effective."
],
[
"This is the first study that has conducted extensive experiments to investigate the effectiveness of incorporating saliency models into the pre-trained seq-to-seq model. From the results, we found that saliency models were effective in finding important parts of the source text, even if the seq-to-seq model is pre-trained on large-scale corpora, especially for generating an highly extractive summary. We also proposed a new combination model, CIT, that outperformed simple fine-tuning and other combination models. Our combination model improved the summarization accuracy without any additional pre-training data and can be applied to any pre-trained model. While recent studies have been conducted to improve summarization accuracy by increasing the amount of pre-training data and developing new pre-training strategies, this study sheds light on the importance of saliency models in abstractive summarization."
]
],
"section_name": [
"Introduction",
"Task Definition",
"Pre-trained seq-to-seq Model",
"Pre-trained seq-to-seq Model ::: Transformer-based Encoder-Decoder",
"Pre-trained seq-to-seq Model ::: Transformer-based Encoder-Decoder ::: Encoder",
"Pre-trained seq-to-seq Model ::: Transformer-based Encoder-Decoder ::: Decoder",
"Pre-trained seq-to-seq Model ::: Transformer-based Encoder-Decoder ::: Multi-head Attention",
"Pre-trained seq-to-seq Model ::: Summary Loss Function",
"Saliency Models",
"Saliency Models ::: Basic Saliency Model",
"Saliency Models ::: Two Types of Saliency Model for Combination",
"Saliency Models ::: Two Types of Saliency Model for Combination ::: Shared encoder",
"Saliency Models ::: Two Types of Saliency Model for Combination ::: Extractor",
"Saliency Models ::: Pseudo Reference Label",
"Saliency Models ::: Saliency Loss Function",
"Combined Models",
"Combined Models ::: Combination types",
"Combined Models ::: Loss function",
"Combined Models ::: Using Shared Encoder to Combine the Saliency Model and the Seq-to-seq Model ::: Multi-Task (MT)",
"Combined Models ::: Using Shared Encoder to Combine the Saliency Model and the Seq-to-seq Model ::: Selective Encoding (SE)",
"Combined Models ::: Using Shared Encoder to Combine the Saliency Model and the Seq-to-seq Model ::: Combination of SE and MT",
"Combined Models ::: Using Shared Encoder to Combine the Saliency Model and the Seq-to-seq Model ::: Selective Attention (SA)",
"Combined Models ::: Using Shared Encoder to Combine the Saliency Model and the Seq-to-seq Model ::: Combination of SA and MT",
"Combined Models ::: Using the Extractor to Refine the Input Text ::: Sentence Extraction then Generation (SEG)",
"Combined Models ::: Proposed: Using Extractor to Extract an Additional Input Text ::: Conditional Summarization Model with Important Tokens",
"Combined Models ::: Proposed: Combination of Extractor and Shared Encoder ::: Combination of CIT and SE",
"Combined Models ::: Proposed: Combination of Extractor and Shared Encoder ::: Combination of CIT and SA",
"Experiments ::: Dataset",
"Experiments ::: Dataset ::: Model Configurations",
"Experiments ::: Evaluation Metrics",
"Experiments ::: Results ::: Do saliency models improve summarization accuracy in highly extractive datasets?",
"Experiments ::: Results ::: Do saliency models improve summarization accuracy in highly abstractive datasets?",
"Experiments ::: Results ::: How accurate are the outputs of the extractors?",
"Experiments ::: Results ::: Does the CIT model outperform other fine-tuned models?",
"Related Work and Discussion ::: Pre-trained Language Models for Abstractive Summarization",
"Related Work and Discussion ::: Abstractive Summarization with Saliency Models",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"932256f3201cec36f75b0f41f52700d421be92b9",
"de35b2cd76b574d19d1460c18e1aec72a2e98cbf"
],
"answer": [
{
"evidence": [
"BIBREF18 used BERT for their sentence-level extractive summarization model. BIBREF19 proposed a new pre-trained model that considers document-level information for sentence-level extractive summarization. Several researchers have published pre-trained encoder-decoder models very recently BIBREF20, BIBREF1, BIBREF2. BIBREF20 pre-trained a Transformer-based pointer-generator model. BIBREF1 pre-trained a standard Transformer-based encoder-decoder model using large unlabeled data and achieved state-of-the-art results. BIBREF8 and BIBREF16 extended the BERT structure to handle seq-to-seq tasks."
],
"extractive_spans": [
"Transformer-based encoder-decoder"
],
"free_form_answer": "",
"highlighted_evidence": [
"Transformer-based encoder-decoder model using large unlabeled data and achieved state-of-the-art results."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"blackWe used BART$_{\\mathrm {LARGE}}$ BIBREF1, which is one of the state-of-the-art models, as the pre-trained seq-to-seq model and RoBERTa$_\\mathrm {BASE}$ BIBREF11 as the initial model of the extractor. In the extractor of CIT, stop words and duplicate tokens are ignored for the XSum dataset."
],
"extractive_spans": [],
"free_form_answer": "BART LARGE",
"highlighted_evidence": [
"blackWe used BART$_{\\mathrm {LARGE}}$ BIBREF1, which is one of the state-of-the-art models, as the pre-trained seq-to-seq model and RoBERTa$_\\mathrm {BASE}$ BIBREF11 as the initial model of the extractor. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"7b64d6bdf5115b4ca7f4e2304d1c3369ab77943f",
"7c87fe59afb50661bf6d1fd625989be16d792043"
],
"answer": [
{
"evidence": [
"The decoder consists of $M$ layer decoder blocks. The inputs of the decoder are the output of the encoder $H_e^M$ and the output of the previous step of the decoder $\\lbrace y_1,...,y_{t-1} \\rbrace $. The output through the $M$ layer Transformer decoder blocks is defined as",
"In each step $t$, the $h_{dt}^M$ is projected to blackthe vocabulary space and the decoder outputs the highest probability token as the next token. The Transformer decoder block consists of a self-attention module, a context-attention module, and a two-layer feed-forward network."
],
"extractive_spans": [
"self-attention module, a context-attention module, and a two-layer feed-forward network"
],
"free_form_answer": "",
"highlighted_evidence": [
"The decoder consists of $M$ layer decoder blocks.",
"The Transformer decoder block consists of a self-attention module, a context-attention module, and a two-layer feed-forward network."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The decoder consists of $M$ layer decoder blocks. The inputs of the decoder are the output of the encoder $H_e^M$ and the output of the previous step of the decoder $\\lbrace y_1,...,y_{t-1} \\rbrace $. The output through the $M$ layer Transformer decoder blocks is defined as",
"In each step $t$, the $h_{dt}^M$ is projected to blackthe vocabulary space and the decoder outputs the highest probability token as the next token. The Transformer decoder block consists of a self-attention module, a context-attention module, and a two-layer feed-forward network."
],
"extractive_spans": [],
"free_form_answer": "M blocks, each consisting of self-attention module, context-attention module, and a two-layer feed-forward network.",
"highlighted_evidence": [
"The decoder consists of $M$ layer decoder blocks.",
"The Transformer decoder block consists of a self-attention module, a context-attention module, and a two-layer feed-forward network."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"f1e4deafda6cda0d097061b81faadf6d34afeda6",
"f28342c5f2ccc970f900563c2c54008bd9c07267"
],
"answer": [
{
"evidence": [
"The encoder consists of $M$ layer encoder blocks. The input of the encoder is $X = \\lbrace x_i, x_2, ... x_L \\rbrace $. The output through the $M$ layer encoder blocks is defined as",
"The encoder block consists of a self-attention module and a two-layer feed-forward network."
],
"extractive_spans": [],
"free_form_answer": "M blocks, each consisting of self-attention module and a two-layer feed-forward network.",
"highlighted_evidence": [
"The encoder consists of $M$ layer encoder blocks.",
"The encoder block consists of a self-attention module and a two-layer feed-forward network."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The encoder consists of $M$ layer encoder blocks. The input of the encoder is $X = \\lbrace x_i, x_2, ... x_L \\rbrace $. The output through the $M$ layer encoder blocks is defined as",
"The encoder block consists of a self-attention module and a two-layer feed-forward network."
],
"extractive_spans": [
"encoder block consists of a self-attention module and a two-layer feed-forward network"
],
"free_form_answer": "",
"highlighted_evidence": [
"The encoder consists of $M$ layer encoder blocks. The input of the encoder is $X = \\lbrace x_i, x_2, ... x_L \\rbrace $. The output through the $M$ layer encoder blocks is defined as\n\nThe encoder block consists of a self-attention module and a two-layer feed-forward network."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3dc94570cf70702bf263b42816c682d003ecce95",
"4f147f4927faed23028a5aecae44eb85cf5ce657"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"07c872cdec5f90d095137ac151d9af1648fc3aaf",
"7a937cb4efbbc7d1c7421d9054991e0952ba54d0"
],
"answer": [
{
"evidence": [
"A basic saliency model consists of $M$-layer Transformer encoder blocks ($\\mathrm {Encoder}_\\mathrm {sal}$) and a single-layer feed-forward network. We define the saliency score of the $l$-th token ($1 \\le l \\le L$) in the source text as",
"In this study, we use two types of saliency model for combination: a shared encoder and an extractor. Each model structure is based on the basic saliency model. We describe them below."
],
"extractive_spans": [
"basic saliency model consists of $M$-layer Transformer encoder blocks ($\\mathrm {Encoder}_\\mathrm {sal}$) and a single-layer feed-forward network"
],
"free_form_answer": "",
"highlighted_evidence": [
"A basic saliency model consists of $M$-layer Transformer encoder blocks ($\\mathrm {Encoder}_\\mathrm {sal}$) and a single-layer feed-forward network.",
"In this study, we use two types of saliency model for combination: a shared encoder and an extractor."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"A basic saliency model consists of $M$-layer Transformer encoder blocks ($\\mathrm {Encoder}_\\mathrm {sal}$) and a single-layer feed-forward network. We define the saliency score of the $l$-th token ($1 \\le l \\le L$) in the source text as",
"The encoder block consists of a self-attention module and a two-layer feed-forward network."
],
"extractive_spans": [],
"free_form_answer": "M blocks, each consisting of a self-attention module and two-layer feed-forward network, combined with a single-layer feed-forward network.",
"highlighted_evidence": [
"A basic saliency model consists of $M$-layer Transformer encoder blocks ($\\mathrm {Encoder}_\\mathrm {sal}$) and a single-layer feed-forward network. ",
"The encoder block consists of a self-attention module and a two-layer feed-forward network."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What is the previous state-of-the-art?",
"What is the architecture of the decoder?",
"What is the architecture of the encoder?",
"What are the languages of the datasets?",
"What is the architecture of the saliency model?"
],
"question_id": [
"bd53399be8ff59060792da4c8e42a7fc1e6cbd85",
"a7313c29b154e84b571322532f5cab08e9d49e51",
"cfe21b979a6c851bdafb2e414622f61e62b1d98c",
"3e3d123960e40bcb1618e11999bd2031ccc1d155",
"2e37eb2a2a9ad80391e57acb53616eab048ab640"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Combinations of seq-to-seq and saliency models. Purple: Encoder. Blue: Decoder. Red: Shared encoder, which is a shared model for saliency detection and encoding, used in (a), (b), and (e). Yellow: Extractor, which is an independent saliency model to extract important (c) sentences Xs or (d), (e) tokens C from the source text X . Each of these colored blocks represents M -layer Transformer blocks. Gray: Linear transformation. Green: Context attention. Pink: Output trained in a supervised manner, where S is the saliency score and Y is the summary.",
"Table 1: Details of the datasets used in this paper.",
"Table 2: Results of BART and combined models on CNN/DM dataset. Five row-groups are the models described in §3, §5.1, §5.2, §5.3, and §5.4 in order from top to bottom.",
"Table 3: Results of BART and combined models on XSum dataset. The underlined result represents the best result among the models that outperformed our simple fine-tuning result.",
"Table 5: Results of state-of-the-art models and the proposed model on CNN/DM dataset. We also report the size of pre-training data and parameters utilized for each model. 1(Song et al., 2019); 2(Liu and Lapata, 2019); 3(Dong et al., 2019); 4(Raffel et al., 2019); 5(Lewis et al., 2019); 6(Zhang et al., 2019a) 7(Yan et al., 2020); 8(Xiao et al., 2020); 9(Bao et al., 2020)",
"Table 4: Results of saliency models on CNN/DM and XSum datasets. CIT extracted the top-K tokens or top3 sentences from the source text. 1(Liu and Lapata, 2019). The summaries in XSum are highly extractive, so the result of BertSumExt for XSum was not reported.",
"Table 6: Results of state-of-the-art models and the proposed model on XSum dataset. 1(Song et al., 2019); 2(Liu and Lapata, 2019); 3(Lewis et al., 2019); 4(Zhang et al., 2019a); 5(Bao et al., 2020)"
],
"file": [
"4-Figure1-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"6-Table5-1.png",
"6-Table4-1.png",
"6-Table6-1.png"
]
} | [
"What is the previous state-of-the-art?",
"What is the architecture of the decoder?",
"What is the architecture of the encoder?",
"What is the architecture of the saliency model?"
] | [
[
"2003.13028-Experiments ::: Dataset ::: Model Configurations-0",
"2003.13028-Related Work and Discussion ::: Pre-trained Language Models for Abstractive Summarization-0"
],
[
"2003.13028-Pre-trained seq-to-seq Model ::: Transformer-based Encoder-Decoder ::: Decoder-0",
"2003.13028-Pre-trained seq-to-seq Model ::: Transformer-based Encoder-Decoder ::: Decoder-1"
],
[
"2003.13028-Pre-trained seq-to-seq Model ::: Transformer-based Encoder-Decoder ::: Encoder-1",
"2003.13028-Pre-trained seq-to-seq Model ::: Transformer-based Encoder-Decoder ::: Encoder-0"
],
[
"2003.13028-Saliency Models ::: Two Types of Saliency Model for Combination-0",
"2003.13028-Saliency Models ::: Basic Saliency Model-0",
"2003.13028-Pre-trained seq-to-seq Model ::: Transformer-based Encoder-Decoder ::: Encoder-1"
]
] | [
"BART LARGE",
"M blocks, each consisting of self-attention module, context-attention module, and a two-layer feed-forward network.",
"M blocks, each consisting of self-attention module and a two-layer feed-forward network.",
"M blocks, each consisting of a self-attention module and two-layer feed-forward network, combined with a single-layer feed-forward network."
] | 269 |
2003.03728 | Pseudo Labeling and Negative Feedback Learning for Large-scale Multi-label Domain Classification | In large-scale domain classification, an utterance can be handled by multiple domains with overlapped capabilities. However, only a limited number of ground-truth domains are provided for each training utterance in practice while knowing as many as correct target labels is helpful for improving the model performance. In this paper, given one ground-truth domain for each training utterance, we regard domains consistently predicted with the highest confidences as additional pseudo labels for the training. In order to reduce prediction errors due to incorrect pseudo labels, we leverage utterances with negative system responses to decrease the confidences of the incorrectly predicted domains. Evaluating on user utterances from an intelligent conversational system, we show that the proposed approach significantly improves the performance of domain classification with hypothesis reranking. | {
"paragraphs": [
[
"Domain classification is a task that predicts the most relevant domain given an input utterance BIBREF0. It is becoming more challenging since recent conversational interaction systems such as Amazon Alexa, Google Assistant, and Microsoft Cortana support more than thousands of domains developed by external developers BIBREF3, BIBREF2, BIBREF4. As they are independently and rapidly developed without a centralized ontology, multiple domains have overlapped capabilities that can process the same utterances. For example, “make an elephant sound” can be processed by AnimalSounds, AnimalNoises, and ZooKeeper domains.",
"Since there are a large number of domains, which are even frequently added or removed, it is infeasible to obtain all the ground-truth domains of the training utterances, and domain classifiers for conversational interaction systems are usually trained given only a small number (usually one) of ground-truths in the training utterances. This setting corresponds to multi-label positive and unlabeled (PU) learning, where assigned labels are positive, unassigned labels are not necessarily negative, and one or more labels are assigned for an instance BIBREF5, BIBREF6.",
"In this paper, we utilize user log data, which contain triples of an utterance, the predicted domain, and the response, for the model training. Therefore, we are given only one ground-truth for each training utterance. In order to improve the classification performance in this setting, if certain domains are repeatedly predicted with the highest confidences even though they are not the ground-truths of an utterance, we regard the domains as additional pseudo labels. This is closely related to pseudo labeling BIBREF7 or self-training BIBREF8, BIBREF9, BIBREF10. While the conventional pseudo labeling is used to derive target labels for unlabeled data, our approach adds pseudo labels to singly labeled data so that the data can have multiple target labels. Also, the approach is related to self-distillation, which leverages the confidence scores of the non-target outputs to improve the model performance BIBREF11, BIBREF12. While distillation methods utilize the confidence scores as the soft targets, pseudo labeling regards high confident outputs as the hard targets to further boost their confidences. We use both pseudo labeling and self-distillation in our work.",
"Pseudo labels can be wrongly derived when irrelevant domains are top predicted, which can lead the model training with wrong supervision. To mitigate this issue, we leverage utterances with negative system responses to lower the prediction confidences of the failing domains. For example, if a system response of a domain for an input utterance is “I don't know that one”, the domain is regarded as a negative ground-truth since it fails to handle the utterance.",
"Evaluating on an annotated dataset from the user logs of a large-scale conversation interaction system, we show that the proposed approach significantly improves the domain classification especially when hypothesis reranking is used BIBREF13, BIBREF4."
],
[
"We take a hypothesis reranking approach, which is widely used in large-scale domain classification for higher scalability BIBREF13, BIBREF4. Within the approach, a shortlister, which is a light-weighted domain classifier, suggests the most promising $k$ domains as the hypotheses. We train the shortlister along with the added pseudo labels, leveraging negative system responses, and self-distillation, which are described in Section SECREF3. Then a hypothesis reranker selects the final prediction from the $k$ hypotheses enriched with additional input features, which is described in Section SECREF4."
],
[
"Our shortlister architecture is shown in Figure FIGREF3. The words of an input utterance are represented as contextualized word vectors by bidirectional long short-term memory (BiLSTM) on top of the word embedding layer BIBREF14. Then, the concatenation of the last outputs of the forward LSTM and the backward LSTM is used to represent the utterance as a vector. Following BIBREF2 and BIBREF17, we leverage the domain enablement information through attention mechanism BIBREF18, where the weighted sum of enabled domain vectors followed by sigmoid activation is concatenated to the utterance vector for representing a personalized utterance. On top of the personalized utterance vector, a feed-forward neural network followed by sigmoid activation is used to obtain $n$-dimensional output vector $o$, where the prediction confidence of each domain is represented as a scalar value between 0 and 1.",
"Given an input utterance and its target label, binary cross entropy is used as the baseline loss function as follows:",
"",
"where $o$, $y$, and $n$ denote the model output vector, the one-hot vector of the target label, and the number of total labels. We describe other proposed loss functions in the following subsections."
],
[
"We hypothesize that the outputs repeatedly predicted with the highest confidences are indeed correct labels in many cases in multi-label PU learning setting. This approach is closely related to pseudo labeling BIBREF7 or self-training BIBREF8, BIBREF9, BIBREF10 in semi-supervised learning since our model is supervised with additional pseudo labels, but differs in that our approach assigns pseudo labels to singly labeled train sets rather than unlabeled data sets.",
"We derive the pseudo labels when the following conditions are met:",
"Maximally $p$ domains predicted with the highest confidences that are higher than the confidence of the known ground-truth.",
"Domains predicted with the highest confidences for $r$ times consecutively so that consistent top predictions are used as pseudo labels.",
"For the experiments in Section SECREF5, we use $p$=2 and $r$=4, which show the best dev set performance. Those derived pseudo labels are used in the model training as follows:",
"",
"where $\\tilde{y}$ denotes an $n$-hot vector such that the elements corresponding to the original ground-truth and the additional pseudo labels are set to 1."
],
[
"During the model training, irrelevant domains could be top predicted, and regarding them as additional target labels results in wrong confirmation bias BIBREF19, which causes incorrect model training. To reduce the side effect, we leverage utterances with negative responses in order to discourage the utterances' incorrect predictions. This setting can be considered as a multi-label variant of Positive, Unlabeled, and Biased Negative Data (PUbN) learning BIBREF20.",
"We obtain training utterances from log data, where utterances with positive system responses are used as the positive train set in Equation DISPLAY_FORM6 and DISPLAY_FORM10 while the utterances with negative responses are used as the negative train set in Equation DISPLAY_FORM14. For example, AnimalSounds is a (positive) ground-truth domain for “a monkey sound” because the system response to the utterance is “Here comes a monkey sound” while it is a negative ground-truth for “a dragon sound” as the response is “I don't know what sound a dragon makes”.",
"Previous work BIBREF21, BIBREF22 excludes such negative utterances from the training set. We find that it is more effective to explicitly demote the prediction confidences of the domains resulted in negative responses if they are top ranked. It is formulated as a loss function:",
"where $j$ denotes the index corresponding to the negative ground-truth domain. We demote the confidences of the negative ground-truths only when they are the highest so that the influence of using the negative ground-truths is not overwhelming."
],
[
"Knowledge distillation has been shown to improve the model performance by leveraging the prediction confidence scores from another model or from previous epochs BIBREF11, BIBREF12, BIBREF17. Inspired by BIBREF17, we utilize the model at the epoch showing the best dev set performance before the current epoch to obtain the prediction confidence scores as the soft target. The self-distillation in our work can be formulated as follows:",
"",
"where $\\tilde{o_i}$ denotes the model output at the epoch showing the best dev set performance so far. Before taking sigmoid to obtain $\\tilde{o_i}$, we use 16 as the temperature to increase the influence of distillation BIBREF11, which shows the best dev set performance following BIBREF17."
],
[
"The model is optimized with a combined loss function as follows:",
"",
"where $\\alpha ^t=1-0.95^t$ and $t$ is the current epoch so that the baseline loss is mainly used in the earlier epochs while the pseudo labels and self-distillation are more contributing in the later epochs following BIBREF23. $\\beta $ is a hyperparameter for utilizing negative ground-truths, which is set to 0.00025 showing the best dev set performance."
],
[
"Figure FIGREF20 shows the overall architecture of the hypothesis reranker that is similar to BIBREF4. First, we run intent classification and slot filling for the $k$ most confident domains from the shortlister outputs to obtain additional information for those domains BIBREF0. Then, we compose $k$ hypotheses, each of which is a vector consists of the shortlister confidence score, intent score, Viterbi score of slot-filling, domain vector, intent vector, and the summation of the slot vectors. On top of the $k$ hypothesis vectors, a BiLSTM is utilized for representing contextualized hypotheses and a shared feed-forward neural network is used to obtain final confidence score for each hypothesis. We set $k$=3 in our experiments following BIBREF4. We leverage the given ground-truth and the derived pseudo labels from the shortlister at the epoch showing the best dev set performance as target labels for training the reranker. We use hinge loss with margin 0.4 as the loss function.",
"One issue of the hypothesis reranking is that a training utterance cannot be used if no ground-truth exist in the top $k$ predictions of the shortlister. This is problematic in the multi-label PU setting since correct domains can indeed exist in the top $k$ list but unknown, which makes the training utterance less useful in the reranking. Our pseudo labeling method can address this issue. If correct pseudo labels are derived from the shortlister's top predictions for such utterances, we can use them properly in the reranker training, which was unavailable without them. This allows our approach make more improvement in hypothesis reranking than shortlisting."
],
[
"In this section, we show training and evaluation sets, and experiment results."
],
[
"We utilize utterances with explicit invocation patterns from an intelligent conversational system for the model training similarly to BIBREF4 and BIBREF17. For example, given “ask {AmbientSounds} to {play thunderstorm sound}”, we extract “play thunderstorm” as the input utterance and Ambient",
"Sounds as the ground-truth. One difference from the previous work is that we utilize utterances with positive system responses as the positive train set and the dev set, and use those with the negative responses as the negative train set as described in Section SECREF11. We have extracted 3M positive train, 400K negative train, and 600K dev sets from 4M log data with 2,500 most frequent domains as the ground-truths. Pseudo labels are added to 53K out of 3M in the positive train set as described in Section SECREF7.",
"For the evaluation, we have extracted 10K random utterances from the user log data and independent annotators labeled the top three predictions of all the evaluated models for each utterance so that we can correctly compute nDCG at rank position 3."
],
[
"Table TABREF21 shows the evaluation results of the shortlister and the hypothesis reranker with the proposed approaches. For the shortlisters, we show nDCG$_3$ scores, which are highly correlated with the F1 scores of the rerankers than other metrics since the second and third top shortlister predictions contribute the metric. We find that just using the pseudo labels as the additional targets degrades the performance (2). However, when both the pseudo labels and the negative ground-truths are utilized, we observe significant improvements for both precision and recall (5). In addition, recall is increased when self-distillation is used, which achieves the best F1 score (6). Each of utilizing the negative feedback $((1)\\rightarrow (3) \\;\\text{and}\\; (2)\\rightarrow (5))$ and then additional pseudo labels $((3)\\rightarrow (5) \\;\\text{and}\\; (4)\\rightarrow (6))$ show statistically significant improvements with McNemar test for p=0.05 for the final reranker results.",
"Using self-distillation $((3)\\rightarrow (4) \\;\\text{and}\\; (5)\\rightarrow (6))$ shows increased F-1 score by increasing recall and decreasing precision, but the improvements are not significant. One issue is that pseudo labeling and self-distillation are contrary since the former encourages entropy minimization BIBREF25, BIBREF7 while the latter can increase entropy by soft targeting the non-target labels. More investigation of self-distillation along with the proposed pseudo labeling would be future work.",
"Table TABREF22 shows examples of derived pseudo labels from model (6). It demonstrates that the domains capable of processing the utterances can be derived, which helps more correct model training."
],
[
"We have proposed deriving pseudo labels along with leveraging utterances with negative system responses and self-distillation to improve the performance of domain classification when multiple domains are ground-truths even if only one ground-truth is known in large-scale domain classification. Evaluating on the test utterances with multiple ground-truths from an intelligent conversational system, we have showed that the proposed approach significantly improves the performance of domain classification with hypothesis reranking.",
"As future work, combining our approach with pure semi-supervised learning, and the relation between pseudo labeling and distillation should be further studied."
]
],
"section_name": [
"Introduction",
"Model Overview",
"Shortlister Model",
"Shortlister Model ::: Deriving Pseudo Labels",
"Shortlister Model ::: Leveraging Negative Feedback",
"Shortlister Model ::: Self-distillation",
"Shortlister Model ::: Combined Loss",
"Hypothesis Reranking Model",
"Experiments",
"Experiments ::: Datasets",
"Experiments ::: Experiment Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"09249c88a69847855726770c01d58372712fb1e7",
"1f83560bb111463ad84d73019f245e6d0c9db6d7"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1. Evaluation results on various metrics (%). pseudo, neg feed, and self dist denote using derived pseudo labels, negative feedback, and self-distillation, respectively."
],
"extractive_spans": [],
"free_form_answer": "F-1 score was improved by 1.19 percent points.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1. Evaluation results on various metrics (%). pseudo, neg feed, and self dist denote using derived pseudo labels, negative feedback, and self-distillation, respectively."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF21 shows the evaluation results of the shortlister and the hypothesis reranker with the proposed approaches. For the shortlisters, we show nDCG$_3$ scores, which are highly correlated with the F1 scores of the rerankers than other metrics since the second and third top shortlister predictions contribute the metric. We find that just using the pseudo labels as the additional targets degrades the performance (2). However, when both the pseudo labels and the negative ground-truths are utilized, we observe significant improvements for both precision and recall (5). In addition, recall is increased when self-distillation is used, which achieves the best F1 score (6). Each of utilizing the negative feedback $((1)\\rightarrow (3) \\;\\text{and}\\; (2)\\rightarrow (5))$ and then additional pseudo labels $((3)\\rightarrow (5) \\;\\text{and}\\; (4)\\rightarrow (6))$ show statistically significant improvements with McNemar test for p=0.05 for the final reranker results.",
"FLOAT SELECTED: Table 1. Evaluation results on various metrics (%). pseudo, neg feed, and self dist denote using derived pseudo labels, negative feedback, and self-distillation, respectively."
],
"extractive_spans": [],
"free_form_answer": "F1 is improved from 80.15 to 80.50 and from 80.71 to 81.69 of Shortlister and Hipothesis Reranker models respectively.",
"highlighted_evidence": [
"Table TABREF21 shows the evaluation results of the shortlister and the hypothesis reranker with the proposed approaches.",
"FLOAT SELECTED: Table 1. Evaluation results on various metrics (%). pseudo, neg feed, and self dist denote using derived pseudo labels, negative feedback, and self-distillation, respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4689ea8c46a95f5c7cae052b1b90dbdd125b92b3",
"bd2d097b668db125c7ca7bf6482f59c9e358d053"
],
"answer": [
{
"evidence": [
"For the evaluation, we have extracted 10K random utterances from the user log data and independent annotators labeled the top three predictions of all the evaluated models for each utterance so that we can correctly compute nDCG at rank position 3."
],
"extractive_spans": [
"10K random utterances from the user log data"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the evaluation, we have extracted 10K random utterances from the user log data and independent annotators labeled the top three predictions of all the evaluated models for each utterance so that we can correctly compute nDCG at rank position 3."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We utilize utterances with explicit invocation patterns from an intelligent conversational system for the model training similarly to BIBREF4 and BIBREF17. For example, given “ask {AmbientSounds} to {play thunderstorm sound}”, we extract “play thunderstorm” as the input utterance and Ambient",
"For the evaluation, we have extracted 10K random utterances from the user log data and independent annotators labeled the top three predictions of all the evaluated models for each utterance so that we can correctly compute nDCG at rank position 3."
],
"extractive_spans": [],
"free_form_answer": "The dataset was created by extracting utterances from the user log data from an intelligent conversational system.",
"highlighted_evidence": [
"We utilize utterances with explicit invocation patterns from an intelligent conversational system for the model training similarly to BIBREF4 and BIBREF17. ",
"For the evaluation, we have extracted 10K random utterances from the user log data and independent annotators labeled the top three predictions of all the evaluated models for each utterance so that we can correctly compute nDCG at rank position 3."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0842773a226b4acf7dcf1bf0f3aeff502d853757",
"54eee592a2c170f84296cac5669192424b76129b"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ceb6304bb650e17e71f99b83b1ebe3cbb1dcc3a0",
"fc9e960bdeaf6e95e432900e662d53cbca64711a"
],
"answer": [
{
"evidence": [
"Previous work BIBREF21, BIBREF22 excludes such negative utterances from the training set. We find that it is more effective to explicitly demote the prediction confidences of the domains resulted in negative responses if they are top ranked. It is formulated as a loss function:",
"where $j$ denotes the index corresponding to the negative ground-truth domain. We demote the confidences of the negative ground-truths only when they are the highest so that the influence of using the negative ground-truths is not overwhelming."
],
"extractive_spans": [],
"free_form_answer": "The confidence of the incorrectly predicted domain is decreased only when it is highest among all predictions.",
"highlighted_evidence": [
"We find that it is more effective to explicitly demote the prediction confidences of the domains resulted in negative responses if they are top ranked. It is formulated as a loss function:\n\nwhere $j$ denotes the index corresponding to the negative ground-truth domain. We demote the confidences of the negative ground-truths only when they are the highest so that the influence of using the negative ground-truths is not overwhelming."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Previous work BIBREF21, BIBREF22 excludes such negative utterances from the training set. We find that it is more effective to explicitly demote the prediction confidences of the domains resulted in negative responses if they are top ranked. It is formulated as a loss function:",
"where $j$ denotes the index corresponding to the negative ground-truth domain. We demote the confidences of the negative ground-truths only when they are the highest so that the influence of using the negative ground-truths is not overwhelming."
],
"extractive_spans": [
"demote the confidences of the negative ground-truths only when they are the highest so that the influence of using the negative ground-truths is not overwhelming"
],
"free_form_answer": "",
"highlighted_evidence": [
"We find that it is more effective to explicitly demote the prediction confidences of the domains resulted in negative responses if they are top ranked.",
"We demote the confidences of the negative ground-truths only when they are the highest so that the influence of using the negative ground-truths is not overwhelming."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"By how much do they improve on domain classification?",
"Which dataset do they evaluate on?",
"How does their approach work for domains with few overlapping utterances? ",
"How do they decide by how much to decrease confidences of incorrectly predicted domains?"
],
"question_id": [
"049415676f8323f4af16d349f36fbcaafd7367ae",
"fee498457774d9617068890ff29528e9fa05a2ac",
"c626637ed14dee3049b87171ddf326115e59d9ee",
"b160bfb341f24ae42a268aa18641237a4b3a6457"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Shortlister architecture: an input utterance is represented as a concatenation of the utterance vector from BiLSTM and the weighted sum of domain enablement vectors through domain enablement attention mechanism. Then, a feed-forward neural network followed by sigmoid activation represents the n-dimensional output vector.",
"Fig. 2. Hypothesis Reranker Architecture: Each hypothesis consists of scores and vectors of domain, intent, and slots. Then, BiLSTM and a feed-forward neural network are used to represent contextualized hypothesis confidence scores.",
"Table 1. Evaluation results on various metrics (%). pseudo, neg feed, and self dist denote using derived pseudo labels, negative feedback, and self-distillation, respectively.",
"Table 2. Examples of additional pseudo labels."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"4-Table2-1.png"
]
} | [
"By how much do they improve on domain classification?",
"Which dataset do they evaluate on?",
"How do they decide by how much to decrease confidences of incorrectly predicted domains?"
] | [
[
"2003.03728-Experiments ::: Experiment Results-0",
"2003.03728-4-Table1-1.png"
],
[
"2003.03728-Experiments ::: Datasets-0",
"2003.03728-Experiments ::: Datasets-2"
],
[
"2003.03728-Shortlister Model ::: Leveraging Negative Feedback-3",
"2003.03728-Shortlister Model ::: Leveraging Negative Feedback-2"
]
] | [
"F1 is improved from 80.15 to 80.50 and from 80.71 to 81.69 of Shortlister and Hipothesis Reranker models respectively.",
"The dataset was created by extracting utterances from the user log data from an intelligent conversational system.",
"The confidence of the incorrectly predicted domain is decreased only when it is highest among all predictions."
] | 272 |
1909.01860 | Visual Question Answering using Deep Learning: A Survey and Performance Analysis | The Visual Question Answering (VQA) task combines challenges for processing data with both Visual and Linguistic processing, to answer basic `common sense' questions about given images. Given an image and a question in natural language, the VQA system tries to find the correct answer to it using visual elements of the image and inference gathered from textual questions. In this survey, we cover and discuss the recent datasets released in the VQA domain dealing with various types of question-formats and enabling robustness of the machine-learning models. Next, we discuss about new deep learning models that have shown promising results over the VQA datasets. At the end, we present and discuss some of the results computed by us over the vanilla VQA models, Stacked Attention Network and the VQA Challenge 2017 winner model. We also provide the detailed analysis along with the challenges and future research directions. | {
"paragraphs": [
[
"Visual Question Answering (VQA) refers to a challenging task which lies at the intersection of image understanding and language processing. The VQA task has witnessed a significant progress in the recent years by the machine intelligence community. The aim of VQA is to develop a system to answer specific questions about an input image. The answer could be in any of the following forms: a word, a phrase, binary answer, multiple choice answer, or a fill in the blank answer. Agarwal et al. BIBREF0 presented a novel way of combining computer vision and natural language processing concepts of to achieve Visual Grounded Dialogue, a system mimicking the human understanding of the environment with the use of visual observation and language understanding.",
"The advancements in the field of deep learning have certainly helped to develop systems for the task of Image Question Answering. Krizhevsky et al BIBREF1 proposed the AlexNet model, which created a revolution in the computer vision domain. The paper introduced the concept of Convolution Neural Networks (CNN) to the mainstream computer vision application. Later many authors have worked on CNN, which has resulted in robust, deep learning models like VGGNet BIBREF2, Inception BIBREF3, ResNet BIBREF4, and etc. Similarly, the recent advancements in natural language processing area based on deep learning have improved the text understanding prforance as well. The first major algorithm in the context of text processing is considered to be the Recurrent Neural Networks (RNN) BIBREF5 which introduced the concept of prior context for time series based data. This architecture helped the growth of machine text understanding which gave new boundaries to machine translation, text classification and contextual understanding. Another major breakthrough in the domain was the introduction of Long-Short Term Memory (LSTM) architecture BIBREF6 which improvised over the RNN by introducing a context cell which stores the prior relevant information.",
"The vanilla VQA model BIBREF0 used a combination of VGGNet BIBREF2 and LSTM BIBREF6. This model has been revised over the years, employing newer architectures and mathematical formulations. Along with this, many authors have worked on producing datasets for eliminating bias, strengthening the performance of the model by robust question-answer pairs which try to cover the various types of questions, testing the visual and language understanding of the system. In this survey, first we cover major datasets published for validating the Visual Question Answering task, such as VQA dataset BIBREF0, DAQUAR BIBREF7, Visual7W BIBREF8 and most recent datasets up to 2019 include Tally-QA BIBREF9 and KVQA BIBREF10. Next, we discuss the state-of-the-art architectures designed for the task of Visual Question Answering such as Vanilla VQA BIBREF0, Stacked Attention Networks BIBREF11 and Pythia v1.0 BIBREF12. Next we present some of our computed results over the three architectures: vanilla VQA model BIBREF0, Stacked Attention Network (SAN) BIBREF11 and Teney et al. model BIBREF13. Finally, we discuss the observations and future directions."
],
[
"The major VQA datasets are summarized in Table TABREF2. We present the datasets below.",
"DAQUAR: DAQUAR stands for Dataset for Question Answering on Real World Images, released by Malinowski et al. BIBREF7. It is the first dataset released for the IQA task. The images are taken from NYU-Depth V2 dataset BIBREF17. The dataset is small with a total of 1449 images. The question bank includes 12468 question-answer pairs with 2483 unique questions. The questions have been generated by human annotations and confined within 9 question templates using annotations of the NYU-Depth dataset.",
"VQA Dataset: The Visual Question Answering (VQA) dataset BIBREF0 is one of the largest datasets collected from the MS-COCO BIBREF18 dataset. The VQA dataset contains at least 3 questions per image with 10 answers per question. The dataset contains 614,163 questions in the form of open-ended and multiple choice. In multiple choice questions, the answers can be classified as: 1) Correct Answer, 2) Plausible Answer, 3) Popular Answers and 4) Random Answers. Recently, VQA V2 dataset BIBREF0 is released with additional confusing images. The VQA sample images and questions are shown in Fig. SECREF2 in 1st row and 1st column.",
"Visual Madlibs: The Visual Madlibs dataset BIBREF15 presents a different form of template for the Image Question Answering task. One of the forms is the fill in the blanks type, where the system needs to supplement the words to complete the sentence and it mostly targets people, objects, appearances, activities and interactions. The Visual Madlibs samples are shown in Fig. SECREF2 in 1st row and 2nd column.",
"Visual7W: The Visual7W dataset BIBREF8 is also based on the MS-COCO dataset. It contains 47,300 COCO images with 327,939 question-answer pairs. The dataset also consists of 1,311,756 multiple choice questions and answers with 561,459 groundings. The dataset mainly deals with seven forms of questions (from where it derives its name): What, Where, When, Who, Why, How, and Which. It is majorly formed by two types of questions. The ‘telling’ questions are the ones which are text-based, giving a sort of description. The ‘pointing’ questions are the ones that begin with ‘Which,’ and have to be correctly identified by the bounding boxes among the group of plausible answers.",
"CLEVR: CLEVR BIBREF16 is a synthetic dataset to test the visual understanding of the VQA systems. The dataset is generated using three objects in each image, namely cylinder, sphere and cube. These objects are in two different sizes, two different materials and placed in eight different colors. The questions are also synthetically generated based on the objects placed in the image. The dataset also accompanies the ground-truth bounding boxes for each object in the image.",
"Tally-QA: Very recently, in 2019, the Tally-QA BIBREF9 dataset is proposed which is the largest dataset of object counting in the open-ended task. The dataset includes both simple and complex question types which can be seen in Fig. SECREF2. The dataset is quite large in numbers as well as it is 2.5 times the VQA dataset. The dataset contains 287,907 questions, 165,000 images and 19,000 complex questions. The Tally-QA samples are shown in Fig. SECREF2 in 2nd row and 1st column.",
"KVQA: The recent interest in common-sense questions has led to the development of world Knowledge based VQA dataset BIBREF10. The dataset contains questions targeting various categories of nouns and also require world knowledge to arrive at a solution. Questions in this dataset require multi-entity, multi-relation, and multi- hop reasoning over large Knowledge Graphs (KG) to arrive at an answer. The dataset contains 24,000 images with 183,100 question-answer pairs employing around 18K proper nouns. The KVQA samples are shown in Fig. SECREF2 in 2nd row and 2nd column."
],
[
"The emergence of deep-learning architectures have led to the development of the VQA systems. We discuss the state-of-the-art methods with an overview in Table TABREF6.",
"Vanilla VQA BIBREF0: Considered as a benchmark for deep learning methods, the vanilla VQA model uses CNN for feature extraction and LSTM or Recurrent networks for language processing. These features are combined using element-wise operations to a common feature, which is used to classify to one of the answers as shown in Fig. FIGREF4.",
"Stacked Attention Networks BIBREF11: This model introduced the attention using the softmax output of the intermediate question feature. The attention between the features are stacked which helps the model to focus on the important portion of the image.",
"Teney et al. Model BIBREF13: Teney et al. introduced the use of object detection on VQA models and won the VQA Challenge 2017. The model helps in narrowing down the features and apply better attention to images. The model employs the use of R-CNN architecture and showed significant performance in accuracy over other architectures.",
"Neural-Symbolic VQA BIBREF23: Specifically made for CLEVR dataset, this model leverages the question formation and image generation strategy of CLEVR. The images are converted to structured features and the question features are converted to their original root question strategy. This feature is used to filter out the required answer.",
"Focal Visual Text Attention (FVTA) BIBREF24: This model combines the sequence of image features generated by the network, text features of the image (or probable answers) and the question. It applies the attention based on the both text components, and finally classifies the features to answer the question. This model is better suited for the VQA in videos which has more use cases than images.",
"Pythia v1.0 BIBREF27: Pythia v1.0 is the award winning architecture for VQA Challenge 2018. The architecture is similar to Teney et al. BIBREF13 with reduced computations with element-wise multiplication, use of GloVe vectors BIBREF22, and ensemble of 30 models.",
"Differential Networks BIBREF19: This model uses the differences between forward propagation steps to reduce the noise and to learn the interdependency between features. Image features are extracted using Faster-RCNN BIBREF21. The differential modules BIBREF29 are used to refine the features in both text and images. GRU BIBREF30 is used for question feature extraction. Finally, it is combined with an attention module to classify the answers. The Differential Networks architecture is illustrated in Fig. FIGREF5."
],
[
"The reported results for different methods over different datasets are summarized in Table TABREF2 and Table TABREF6. It can be observed that VQA dataset is very commonly used by different methods to test the performance. Other datasets like Visual7W, Tally-QA and KVQA are also very challenging and recent datasets. It can be also seen that the Pythia v1.0 is one of the recent methods performing very well over VQA dataset. The Differentail Network is the very recent method proposed for VQA task and shows very promising performance over different datasets.",
"As part of this survey, we also implemented different methods over different datasets and performed the experiments. We considered the following three models for our experiments, 1) the baseline Vanilla VQA model BIBREF0 which uses the VGG16 CNN architecture BIBREF2 and LSTMs BIBREF6, 2) the Stacked Attention Networks BIBREF11 architecture, and 3) the 2017 VQA challenge winner Teney et al. model BIBREF13. We considered the widely adapted datasets such as standard VQA dataset BIBREF0 and Visual7W dataset BIBREF8 for the experiments. We used the Adam Optimizer for all models with Cross-Entropy loss function. Each model is trained for 100 epochs for each dataset.",
"The experimental results are presented in Table TABREF7 in terms of the accuracy for three models over two datasets. In the experiments, we found that the Teney et al. BIBREF13 is the best performing model on both VQA and Visual7W Dataset. The accuracies obtained over the Teney et al. model are 67.23% and 65.82% over VQA and Visual7W datasets for the open-ended question-answering task, respectively. The above results re-affirmed that the Teney et al. model is the best performing model till 2018 which has been pushed by Pythia v1.0 BIBREF12, recently, where they have utilized the same model with more layers to boost the performance."
],
[
"The Visual Question Answering has recently witnessed a great interest and development by the group of researchers and scientists from all around the world. The recent trends are observed in the area of developing more and more real life looking datasets by incorporating the real world type questions and answers. The recent trends are also seen in the area of development of sophisticated deep learning models by better utilizing the visual cues as well as textual cues by different means. The performance of even best model is still lagging and around 60-70% only. Thus, it is still an open problem to develop better deep learning models as well as more challenging datasets for VQA. Different strategies like object level details, segmentation masks, deeper models, sentiment of the question, etc. can be considered to develop the next generation VQA models."
]
],
"section_name": [
"Introduction",
"Datasets",
"Deep Learning Based VQA Methods",
"Experimental Results and Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"38ff2189dbe0711fada9bab22ed6a5acdad029d3",
"fca6eed4bddf4a4076964422b2f927e3f34cb517"
],
"answer": [
{
"evidence": [
"The Visual Question Answering has recently witnessed a great interest and development by the group of researchers and scientists from all around the world. The recent trends are observed in the area of developing more and more real life looking datasets by incorporating the real world type questions and answers. The recent trends are also seen in the area of development of sophisticated deep learning models by better utilizing the visual cues as well as textual cues by different means. The performance of even best model is still lagging and around 60-70% only. Thus, it is still an open problem to develop better deep learning models as well as more challenging datasets for VQA. Different strategies like object level details, segmentation masks, deeper models, sentiment of the question, etc. can be considered to develop the next generation VQA models."
],
"extractive_spans": [
"develop better deep learning models",
" more challenging datasets for VQA"
],
"free_form_answer": "",
"highlighted_evidence": [
" The performance of even best model is still lagging and around 60-70% only. Thus, it is still an open problem to develop better deep learning models as well as more challenging datasets for VQA. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The Visual Question Answering has recently witnessed a great interest and development by the group of researchers and scientists from all around the world. The recent trends are observed in the area of developing more and more real life looking datasets by incorporating the real world type questions and answers. The recent trends are also seen in the area of development of sophisticated deep learning models by better utilizing the visual cues as well as textual cues by different means. The performance of even best model is still lagging and around 60-70% only. Thus, it is still an open problem to develop better deep learning models as well as more challenging datasets for VQA. Different strategies like object level details, segmentation masks, deeper models, sentiment of the question, etc. can be considered to develop the next generation VQA models."
],
"extractive_spans": [],
"free_form_answer": " object level details, segmentation masks, and sentiment of the question",
"highlighted_evidence": [
"The performance of even best model is still lagging and around 60-70% only. Thus, it is still an open problem to develop better deep learning models as well as more challenging datasets for VQA. Different strategies like object level details, segmentation masks, deeper models, sentiment of the question, etc. can be considered to develop the next generation VQA models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"annotation_id": [
"a6c40e8f4fa717904a657e7de638e579a1efd138",
"b1132b472b402d3b15ced850d61b16fae58785fb"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"85406c67492a25f043e9db2304f7123753f30785",
"c7c14a71a11e7d6e970f415dd7edbd33e040dc1b"
],
"answer": [
{
"evidence": [
"The emergence of deep-learning architectures have led to the development of the VQA systems. We discuss the state-of-the-art methods with an overview in Table TABREF6.",
"Vanilla VQA BIBREF0: Considered as a benchmark for deep learning methods, the vanilla VQA model uses CNN for feature extraction and LSTM or Recurrent networks for language processing. These features are combined using element-wise operations to a common feature, which is used to classify to one of the answers as shown in Fig. FIGREF4.",
"Stacked Attention Networks BIBREF11: This model introduced the attention using the softmax output of the intermediate question feature. The attention between the features are stacked which helps the model to focus on the important portion of the image.",
"Teney et al. Model BIBREF13: Teney et al. introduced the use of object detection on VQA models and won the VQA Challenge 2017. The model helps in narrowing down the features and apply better attention to images. The model employs the use of R-CNN architecture and showed significant performance in accuracy over other architectures.",
"Neural-Symbolic VQA BIBREF23: Specifically made for CLEVR dataset, this model leverages the question formation and image generation strategy of CLEVR. The images are converted to structured features and the question features are converted to their original root question strategy. This feature is used to filter out the required answer.",
"Focal Visual Text Attention (FVTA) BIBREF24: This model combines the sequence of image features generated by the network, text features of the image (or probable answers) and the question. It applies the attention based on the both text components, and finally classifies the features to answer the question. This model is better suited for the VQA in videos which has more use cases than images.",
"Pythia v1.0 BIBREF27: Pythia v1.0 is the award winning architecture for VQA Challenge 2018. The architecture is similar to Teney et al. BIBREF13 with reduced computations with element-wise multiplication, use of GloVe vectors BIBREF22, and ensemble of 30 models.",
"Differential Networks BIBREF19: This model uses the differences between forward propagation steps to reduce the noise and to learn the interdependency between features. Image features are extracted using Faster-RCNN BIBREF21. The differential modules BIBREF29 are used to refine the features in both text and images. GRU BIBREF30 is used for question feature extraction. Finally, it is combined with an attention module to classify the answers. The Differential Networks architecture is illustrated in Fig. FIGREF5."
],
"extractive_spans": [
"Vanilla VQA",
"Stacked Attention Networks",
"Teney et al. Model",
"Neural-Symbolic VQA",
"Focal Visual Text Attention (FVTA)",
"Pythia v1.0",
"Differential Networks"
],
"free_form_answer": "",
"highlighted_evidence": [
"The emergence of deep-learning architectures have led to the development of the VQA systems. We discuss the state-of-the-art methods with an overview in Table TABREF6.\n\nVanilla VQA BIBREF0: Considered as a benchmark for deep learning methods, the vanilla VQA model uses CNN for feature extraction and LSTM or Recurrent networks for language processing. These features are combined using element-wise operations to a common feature, which is used to classify to one of the answers as shown in Fig. FIGREF4.\n\nStacked Attention Networks BIBREF11: This model introduced the attention using the softmax output of the intermediate question feature. The attention between the features are stacked which helps the model to focus on the important portion of the image.\n\nTeney et al. Model BIBREF13: Teney et al. introduced the use of object detection on VQA models and won the VQA Challenge 2017. The model helps in narrowing down the features and apply better attention to images. The model employs the use of R-CNN architecture and showed significant performance in accuracy over other architectures.\n\nNeural-Symbolic VQA BIBREF23: Specifically made for CLEVR dataset, this model leverages the question formation and image generation strategy of CLEVR. The images are converted to structured features and the question features are converted to their original root question strategy. This feature is used to filter out the required answer.\n\nFocal Visual Text Attention (FVTA) BIBREF24: This model combines the sequence of image features generated by the network, text features of the image (or probable answers) and the question. It applies the attention based on the both text components, and finally classifies the features to answer the question. This model is better suited for the VQA in videos which has more use cases than images.\n\nPythia v1.0 BIBREF27: Pythia v1.0 is the award winning architecture for VQA Challenge 2018. The architecture is similar to Teney et al. BIBREF13 with reduced computations with element-wise multiplication, use of GloVe vectors BIBREF22, and ensemble of 30 models.\n\nDifferential Networks BIBREF19: This model uses the differences between forward propagation steps to reduce the noise and to learn the interdependency between features. Image features are extracted using Faster-RCNN BIBREF21. The differential modules BIBREF29 are used to refine the features in both text and images. GRU BIBREF30 is used for question feature extraction. Finally, it is combined with an attention module to classify the answers. The Differential Networks architecture is illustrated in Fig. FIGREF5."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Deep Learning Based VQA Methods",
"The emergence of deep-learning architectures have led to the development of the VQA systems. We discuss the state-of-the-art methods with an overview in Table TABREF6.",
"Vanilla VQA BIBREF0: Considered as a benchmark for deep learning methods, the vanilla VQA model uses CNN for feature extraction and LSTM or Recurrent networks for language processing. These features are combined using element-wise operations to a common feature, which is used to classify to one of the answers as shown in Fig. FIGREF4.",
"Stacked Attention Networks BIBREF11: This model introduced the attention using the softmax output of the intermediate question feature. The attention between the features are stacked which helps the model to focus on the important portion of the image.",
"Teney et al. Model BIBREF13: Teney et al. introduced the use of object detection on VQA models and won the VQA Challenge 2017. The model helps in narrowing down the features and apply better attention to images. The model employs the use of R-CNN architecture and showed significant performance in accuracy over other architectures.",
"Neural-Symbolic VQA BIBREF23: Specifically made for CLEVR dataset, this model leverages the question formation and image generation strategy of CLEVR. The images are converted to structured features and the question features are converted to their original root question strategy. This feature is used to filter out the required answer.",
"Focal Visual Text Attention (FVTA) BIBREF24: This model combines the sequence of image features generated by the network, text features of the image (or probable answers) and the question. It applies the attention based on the both text components, and finally classifies the features to answer the question. This model is better suited for the VQA in videos which has more use cases than images.",
"Pythia v1.0 BIBREF27: Pythia v1.0 is the award winning architecture for VQA Challenge 2018. The architecture is similar to Teney et al. BIBREF13 with reduced computations with element-wise multiplication, use of GloVe vectors BIBREF22, and ensemble of 30 models.",
"Differential Networks BIBREF19: This model uses the differences between forward propagation steps to reduce the noise and to learn the interdependency between features. Image features are extracted using Faster-RCNN BIBREF21. The differential modules BIBREF29 are used to refine the features in both text and images. GRU BIBREF30 is used for question feature extraction. Finally, it is combined with an attention module to classify the answers. The Differential Networks architecture is illustrated in Fig. FIGREF5."
],
"extractive_spans": [
"Stacked Attention Networks BIBREF11",
"Teney et al. Model BIBREF13",
"Neural-Symbolic VQA BIBREF23",
"Focal Visual Text Attention (FVTA) BIBREF24",
"Pythia v1.0 BIBREF27",
"Differential Networks BIBREF19:"
],
"free_form_answer": "",
"highlighted_evidence": [
"Deep Learning Based VQA Methods\nThe emergence of deep-learning architectures have led to the development of the VQA systems. We discuss the state-of-the-art methods with an overview in Table TABREF6.\n\nVanilla VQA BIBREF0: Considered as a benchmark for deep learning methods, the vanilla VQA model uses CNN for feature extraction and LSTM or Recurrent networks for language processing. These features are combined using element-wise operations to a common feature, which is used to classify to one of the answers as shown in Fig. FIGREF4.\n\nStacked Attention Networks BIBREF11: This model introduced the attention using the softmax output of the intermediate question feature. The attention between the features are stacked which helps the model to focus on the important portion of the image.\n\nTeney et al. Model BIBREF13: Teney et al. introduced the use of object detection on VQA models and won the VQA Challenge 2017. The model helps in narrowing down the features and apply better attention to images. The model employs the use of R-CNN architecture and showed significant performance in accuracy over other architectures.\n\nNeural-Symbolic VQA BIBREF23: Specifically made for CLEVR dataset, this model leverages the question formation and image generation strategy of CLEVR. The images are converted to structured features and the question features are converted to their original root question strategy. This feature is used to filter out the required answer.\n\nFocal Visual Text Attention (FVTA) BIBREF24: This model combines the sequence of image features generated by the network, text features of the image (or probable answers) and the question. It applies the attention based on the both text components, and finally classifies the features to answer the question. This model is better suited for the VQA in videos which has more use cases than images.\n\nPythia v1.0 BIBREF27: Pythia v1.0 is the award winning architecture for VQA Challenge 2018. The architecture is similar to Teney et al. BIBREF13 with reduced computations with element-wise multiplication, use of GloVe vectors BIBREF22, and ensemble of 30 models.\n\nDifferential Networks BIBREF19: This model uses the differences between forward propagation steps to reduce the noise and to learn the interdependency between features. Image features are extracted using Faster-RCNN BIBREF21. The differential modules BIBREF29 are used to refine the features in both text and images. GRU BIBREF30 is used for question feature extraction. Finally, it is combined with an attention module to classify the answers. The Differential Networks architecture is illustrated in Fig. FIGREF5."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"annotation_id": [
"08895a7f01e5eebcfdb83f1bb08a035ba8f9ef18",
"39a92d47cceb7ee2da8141d64217ac963031934f"
],
"answer": [
{
"evidence": [
"Teney et al. Model BIBREF13: Teney et al. introduced the use of object detection on VQA models and won the VQA Challenge 2017. The model helps in narrowing down the features and apply better attention to images. The model employs the use of R-CNN architecture and showed significant performance in accuracy over other architectures."
],
"extractive_spans": [],
"free_form_answer": "Region-based CNN",
"highlighted_evidence": [
"Teney et al. Model BIBREF13: Teney et al. introduced the use of object detection on VQA models and won the VQA Challenge 2017. The model helps in narrowing down the features and apply better attention to images. The model employs the use of R-CNN architecture and showed significant performance in accuracy over other architectures."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Teney et al. Model BIBREF13: Teney et al. introduced the use of object detection on VQA models and won the VQA Challenge 2017. The model helps in narrowing down the features and apply better attention to images. The model employs the use of R-CNN architecture and showed significant performance in accuracy over other architectures."
],
"extractive_spans": [
"R-CNN architecture"
],
"free_form_answer": "",
"highlighted_evidence": [
"Teney et al. Model BIBREF13: Teney et al. introduced the use of object detection on VQA models and won the VQA Challenge 2017. The model helps in narrowing down the features and apply better attention to images. The model employs the use of R-CNN architecture and showed significant performance in accuracy over other architectures."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"e0c677c269e4c0b4b225371c4a4cb596416633b4"
]
},
{
"annotation_id": [
"8c6595ca631e4767171045a4d6795ae24a3315a0",
"c2d65f0b3bdebe2e1475a9c25c093701fb9f20d2"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: TABLE I OVERVIEW OF VQA DATASETS DESCRIBED IN THIS PAPER."
],
"extractive_spans": [],
"free_form_answer": "How many giraffes are drinking water?",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE I OVERVIEW OF VQA DATASETS DESCRIBED IN THIS PAPER."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"VQA Dataset: The Visual Question Answering (VQA) dataset BIBREF0 is one of the largest datasets collected from the MS-COCO BIBREF18 dataset. The VQA dataset contains at least 3 questions per image with 10 answers per question. The dataset contains 614,163 questions in the form of open-ended and multiple choice. In multiple choice questions, the answers can be classified as: 1) Correct Answer, 2) Plausible Answer, 3) Popular Answers and 4) Random Answers. Recently, VQA V2 dataset BIBREF0 is released with additional confusing images. The VQA sample images and questions are shown in Fig. SECREF2 in 1st row and 1st column.",
"FLOAT SELECTED: TABLE I OVERVIEW OF VQA DATASETS DESCRIBED IN THIS PAPER."
],
"extractive_spans": [],
"free_form_answer": "Can you park here?\nIs something under the sink broken?\nDoes this man have children?",
"highlighted_evidence": [
" The VQA sample images and questions are shown in Fig. SECREF2 in 1st row and 1st column.",
"FLOAT SELECTED: TABLE I OVERVIEW OF VQA DATASETS DESCRIBED IN THIS PAPER."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"somewhat",
"somewhat"
],
"question": [
"What are remaining challenges in VQA?",
"How quickly is this hybrid model trained? ",
"What are the new deep learning models discussed in the paper? ",
"What was the architecture of the 2017 Challenge Winner model?",
"What is an example of a common sense question?"
],
"question_id": [
"6167618e0c53964f3a706758bdf5e807bc5d7760",
"78a0c25b83cdeaeaf0a4781f502105a514b2af0e",
"08202b800a946b8283c2684e23b51c0ec1e8b2ac",
"00aea97f69290b496ed11eb45a201ad28d741460",
"4e1293592e41646a6f5f0cb00c75ee8de14eb668"
],
"question_writer": [
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1. The timeline of major breakthrough in Visual Question Answering (VQA) in last 5 years, ranging from DAQUAR in 2014 to Differential Networks in 2019.",
"TABLE I OVERVIEW OF VQA DATASETS DESCRIBED IN THIS PAPER.",
"Fig. 3. Vanilla VQA Network Model [1].",
"Fig. 4. Differential Networks Model [20].",
"TABLE II OVERVIEW OF MODELS DESCRIBED IN THIS PAPER. THE PYTHIA V0.1 IS THE BEST PERFORMING MODEL OVER VQA DATASET.",
"TABLE III THE ACCURACIES OBTAINED USING VANILLA VQA [1], STACKED ATTENTION NETWORKS [12] AND TENEY ET AL. [14] MODELS WHEN TRAINED ON VQA [1] AND VISUAL7W [9] DATASETS."
],
"file": [
"2-Figure1-1.png",
"2-TableI-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"4-TableII-1.png",
"4-TableIII-1.png"
]
} | [
"What are remaining challenges in VQA?",
"What was the architecture of the 2017 Challenge Winner model?",
"What is an example of a common sense question?"
] | [
[
"1909.01860-Conclusion-0"
],
[
"1909.01860-Deep Learning Based VQA Methods-3"
],
[
"1909.01860-2-TableI-1.png",
"1909.01860-Datasets-2"
]
] | [
" object level details, segmentation masks, and sentiment of the question",
"Region-based CNN",
"Can you park here?\nIs something under the sink broken?\nDoes this man have children?"
] | 274 |
1803.02155 | Self-Attention with Relative Position Representations | Relying entirely on an attention mechanism, the Transformer introduced by Vaswani et al. (2017) achieves state-of-the-art results for machine translation. In contrast to recurrent and convolutional neural networks, it does not explicitly model relative or absolute position information in its structure. Instead, it requires adding representations of absolute positions to its inputs. In this work we present an alternative approach, extending the self-attention mechanism to efficiently consider representations of the relative positions, or distances between sequence elements. On the WMT 2014 English-to-German and English-to-French translation tasks, this approach yields improvements of 1.3 BLEU and 0.3 BLEU over absolute position representations, respectively. Notably, we observe that combining relative and absolute position representations yields no further improvement in translation quality. We describe an efficient implementation of our method and cast it as an instance of relation-aware self-attention mechanisms that can generalize to arbitrary graph-labeled inputs. | {
"paragraphs": [
[
"Recent approaches to sequence to sequence learning typically leverage recurrence BIBREF0 , convolution BIBREF1 , BIBREF2 , attention BIBREF3 , or a combination of recurrence and attention BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 as basic building blocks. These approaches incorporate information about the sequential position of elements differently.",
"Recurrent neural networks (RNNs) typically compute a hidden state $h_t$ , as a function of their input at time $t$ and a previous hidden state $h_{t-1}$ , capturing relative and absolute positions along the time dimension directly through their sequential structure. Non-recurrent models do not necessarily consider input elements sequentially and may hence require explicitly encoding position information to be able to use sequence order.",
"One common approach is to use position encodings which are combined with input elements to expose position information to the model. These position encodings can be a deterministic function of position BIBREF8 , BIBREF3 or learned representations. Convolutional neural networks inherently capture relative positions within the kernel size of each convolution. They have been shown to still benefit from position encodings BIBREF1 , however.",
"For the Transformer, which employs neither convolution nor recurrence, incorporating explicit representations of position information is an especially important consideration since the model is otherwise entirely invariant to sequence ordering. Attention-based models have therefore used position encodings or biased attention weights based on distance BIBREF9 .",
"In this work we present an efficient way of incorporating relative position representations in the self-attention mechanism of the Transformer. Even when entirely replacing its absolute position encodings, we demonstrate significant improvements in translation quality on two machine translation tasks.",
"Our approach can be cast as a special case of extending the self-attention mechanism of the Transformer to considering arbitrary relations between any two elements of the input, a direction we plan to explore in future work on modeling labeled, directed graphs."
],
[
"The Transformer BIBREF3 employs an encoder-decoder structure, consisting of stacked encoder and decoder layers. Encoder layers consist of two sublayers: self-attention followed by a position-wise feed-forward layer. Decoder layers consist of three sublayers: self-attention followed by encoder-decoder attention, followed by a position-wise feed-forward layer. It uses residual connections around each of the sublayers, followed by layer normalization BIBREF10 . The decoder uses masking in its self-attention to prevent a given output position from incorporating information about future output positions during training.",
"Position encodings based on sinusoids of varying frequency are added to encoder and decoder input elements prior to the first layer. In contrast to learned, absolute position representations, the authors hypothesized that sinusoidal position encodings would help the model to generalize to sequence lengths unseen during training by allowing it to learn to attend also by relative position. This property is shared by our relative position representations which, in contrast to absolute position representations, are invariant to the total sequence length.",
"Residual connections help propagate position information to higher layers."
],
[
"Self-attention sublayers employ $h$ attention heads. To form the sublayer output, results from each head are concatenated and a parameterized linear transformation is applied.",
"Each attention head operates on an input sequence, $x = (x_1, \\ldots , x_n)$ of $n$ elements where $x_i \\in \\mathbb {R}^{d_x}$ , and computes a new sequence $z = (z_1, \\ldots , z_n)$ of the same length where $z_i \\in \\mathbb {R}^{d_z}$ .",
"Each output element, $z_i$ , is computed as weighted sum of a linearly transformed input elements: ",
"$$z_i = \\sum _{j=1}^{n} \\alpha _{ij} (x_jW^V)$$ (Eq. 3) ",
"Each weight coefficient, $\\alpha _{ij}$ , is computed using a softmax function: $\n\\alpha _{ij} = \\frac{ \\exp {e_{ij}} }{ \\sum _{k=1}^{n} \\exp {e_{ik}} }\n$ ",
"And $e_{ij}$ is computed using a compatibility function that compares two input elements: ",
"$$e_{ij} = \\frac{(x_iW^Q)(x_jW^K)^T}{\\sqrt{d_z}}$$ (Eq. 4) ",
"Scaled dot product was chosen for the compatibility function, which enables efficient computation. Linear transformations of the inputs add sufficient expressive power.",
" $W^Q$ , $W^K$ , $W^V \\in \\mathbb {R}^{d_x \\times d_z}$ are parameter matrices. These parameter matrices are unique per layer and attention head."
],
[
"We propose an extension to self-attention to consider the pairwise relationships between input elements. In this sense, we model the input as a labeled, directed, fully-connected graph.",
"The edge between input elements $x_i$ and $x_j$ is represented by vectors $a^V_{ij}, a^K_{ij} \\in \\mathbb {R}^{d_a}$ . The motivation for learning two distinct edge representations is that $a^V_{ij}$ and $a^K_{ij}$ are suitable for use in eq. ( 6 ) and eq. ( 7 ), respectively, without requiring additional linear transformations. These representations can be shared across attention heads. We use $d_a = d_z$ .",
"We modify eq. ( 3 ) to propagate edge information to the sublayer output: ",
"$$z_i = \\sum _{j=1}^{n} \\alpha _{ij} (x_jW^V + a^V_{ij})$$ (Eq. 6) ",
"This extension is presumably important for tasks where information about the edge types selected by a given attention head is useful to downstream encoder or decoder layers. However, as explored in \"Model Variations\" , this may not be necessary for machine translation.",
"We also, importantly, modify eq. ( 4 ) to consider edges when determining compatibility: ",
"$$e_{ij} = \\frac{x_iW^Q(x_jW^K+a^K_{ij})^T}{\\sqrt{d_z}}$$ (Eq. 7) ",
"The primary motivation for using simple addition to incorporate edge representations in eq. ( 6 ) and eq. ( 7 ) is to enable an efficient implementation described in \"Efficient Implementation\" ."
],
[
"For linear sequences, edges can capture information about the relative position differences between input elements. The maximum relative position we consider is clipped to a maximum absolute value of $k$ . We hypothesized that precise relative position information is not useful beyond a certain distance. Clipping the maximum distance also enables the model to generalize to sequence lengths not seen during training. Therefore, we consider $2k+1$ unique edge labels. $\na^K_{ij} &= w^K_{\\mathrm {clip}(j - i, k)} \\\\\na^V_{ij} &= w^V_{\\mathrm {clip}(j - i, k)} \\\\\n\\mathrm {clip}(x, k) &= \\max (-k, \\min (k, x))\n$ ",
"We then learn relative position representations $w^K = (w^K_{-k}, \\ldots , w^K_k)$ and $w^V = (w^V_{-k}, \\ldots , w^V_k)$ where $w^K_i, w^V_i \\in \\mathbb {R}^{d_a}$ ."
],
[
"There are practical space complexity concerns when considering edges between input elements, as noted by Veličković et al. velivckovic2017, which considers unlabeled graph inputs to an attention model.",
"For a sequence of length $n$ and $h$ attention heads, we reduce the space complexity of storing relative position representations from $O(hn^2d_a)$ to $O(n^2d_a)$ by sharing them across each heads. Additionally, relative position representations can be shared across sequences. Therefore, the overall self-attention space complexity increases from $O(bhnd_z)$ to $O(bhnd_z + n^2d_a)$ . Given $d_a = d_z$ , the size of the relative increase depends on $\\frac{n}{bh}$ .",
"The Transformer computes self-attention efficiently for all sequences, heads, and positions in a batch using parallel matrix multiplication operations BIBREF3 . Without relative position representations, each $e_{ij}$ can be computed using $bh$ parallel multiplications of $n \\times d_z$ and $d_z \\times n$ matrices. Each matrix multiplication computes $e_{ij}$ for all sequence positions, for a particular head and sequence. For any sequence and head, this requires sharing the same representation for each position across all compatibility function applications (dot products) with other positions.",
"When we consider relative positions the representations differ with different pairs of positions. This prevents us from computing all $e_{ij}$ for all pairs of positions in a single matrix multiplication. We also want to avoid broadcasting relative position representations. However, both issues can be resolved by splitting the computation of eq. ( 7 ) into two terms: ",
"$$e_{ij} = \\frac{x_iW^Q(x_jW^K)^T + x_iW^Q(a^K_{ij})^T}{\\sqrt{d_z}}$$ (Eq. 11) ",
"The first term is identical to eq. ( 4 ), and can be computed as described above. For the second term involving relative position representations, tensor reshaping can be used to compute $n$ parallel multiplications of $bh \\times d_z$ and $d_z \\times n$ matrices. Each matrix multiplication computes contributions to $e_{ij}$ for all heads and batches, corresponding to a particular sequence position. Further reshaping allows adding the two terms. The same approach can be used to efficiently compute eq. ( 6 ).",
"For our machine translation experiments, the result was a modest 7% decrease in steps per second, but we were able to maintain the same model and batch sizes on P100 GPUs as Vaswani et al. vaswani2017."
],
[
"We use the tensor2tensor library for training and evaluating our model.",
"We evaluated our model on the WMT 2014 machine translation task, using the WMT 2014 English-German dataset consisting of approximately 4.5M sentence pairs and the 2014 WMT English-French dataset consisting of approximately 36M sentence pairs.",
"For all experiments, we split tokens into a 32,768 word-piece vocabulary BIBREF7 . We batched sentence pairs by approximate length, and limited input and output tokens per batch to 4096 per GPU. Each resulting training batch contained approximately 25,000 source and 25,000 target tokens.",
"We used the Adam optimizer BIBREF11 with $\\beta _1=0.9$ , $\\beta _2=0.98$ , and $\\epsilon = 10^{-9}$ . We used the same warmup and decay strategy for learning rate as Vaswani et al. vaswani2017, with 4,000 warmup steps. During training, we employed label smoothing of value $\\epsilon _{ls} = 0.1$ BIBREF12 . For evaluation, we used beam search with a beam size of 4 and length penalty $\\alpha = 0.6$ BIBREF7 .",
"For our base model, we used 6 encoder and decoder layers, $d_x = 512$ , $d_z = 64$ , 8 attention heads, 1024 feed forward inner-layer dimensions, and $P_{dropout} = 0.1$ . When using relative position encodings, we used clipping distance $k = 16$ , and used unique edge representations per layer and head. We trained for 100,000 steps on 8 K40 GPUs, and did not use checkpoint averaging.",
"For our big model, we used 6 encoder and decoder layers, $d_x = 1024$ , $d_z = 64$ , 16 attention heads, 4096 feed forward inner-layer dimensions, and $P_{dropout} = 0.3$ for EN-DE and $P_{dropout} = 0.1$ for EN-FR. When using relative position encodings, we used $k = 8$ , and used unique edge representations per layer. We trained for 300,000 steps on 8 P100 GPUs, and averaged the last 20 checkpoints, saved at 10 minute intervals."
],
[
"We compared our model using only relative position representations to the baseline Transformer BIBREF3 with sinusoidal position encodings. We generated baseline results to isolate the impact of relative position representations from any other changes to the underlying library and experimental configuration.",
"For English-to-German our approach improved performance over our baseline by 0.3 and 1.3 BLEU for the base and big configurations, respectively. For English-to-French it improved by 0.5 and 0.3 BLEU for the base and big configurations, respectively. In our experiments we did not observe any benefit from including sinusoidal position encodings in addition to relative position representations. The results are shown in Table 1 ."
],
[
"We performed several experiments modifying various aspects of our model. All of our experiments in this section use the base model configuration without any absolute position representations. BLEU scores are calculated on the WMT English-to-German task using the development set, newstest2013.",
"We evaluated the effect of varying the clipping distance, $k$ , of the maximum absolute relative position difference. Notably, for $k \\ge 2$ , there does not appear to be much variation in BLEU scores. However, as we use multiple encoder layers, precise relative position information may be able to propagate beyond the clipping distance. The results are shown in Table 2 .",
"We also evaluated the impact of ablating each of the two relative position representations defined in section \"Conclusions\" , $a^V_{ij}$ in eq. ( 6 ) and $a^K_{ij}$ in eq. ( 7 ). Including relative position representations solely when determining compatibility between elements may be sufficient, but further work is needed to determine whether this is true for other tasks. The results are shown in Table 3 ."
],
[
"In this paper we presented an extension to self-attention that can be used to incorporate relative position information for sequences, which improves performance for machine translation.",
"For future work, we plan to extend this mechanism to consider arbitrary directed, labeled graph inputs to the Transformer. We are also interested in nonlinear compatibility functions to combine input representations and edge representations. For both of these extensions, a key consideration will be determining efficient implementations."
]
],
"section_name": [
"Introduction",
"Transformer",
"Self-Attention",
"Relation-aware Self-Attention",
"Relative Position Representations",
"Efficient Implementation",
"Experimental Setup",
"Machine Translation",
"Model Variations",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"4aedd36337434ba831df5159caac4756ca3c262f",
"ae6933639d9ce20ffcc3c50b49ef1c0f77c8d418"
],
"answer": [
{
"evidence": [
"For our machine translation experiments, the result was a modest 7% decrease in steps per second, but we were able to maintain the same model and batch sizes on P100 GPUs as Vaswani et al. vaswani2017."
],
"extractive_spans": [
"7% decrease in steps per second"
],
"free_form_answer": "",
"highlighted_evidence": [
"For our machine translation experiments, the result was a modest 7% decrease in steps per second, but we were able to maintain the same model and batch sizes on P100 GPUs as Vaswani et al. vaswani2017."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For our machine translation experiments, the result was a modest 7% decrease in steps per second, but we were able to maintain the same model and batch sizes on P100 GPUs as Vaswani et al. vaswani2017."
],
"extractive_spans": [
"a modest 7% decrease in steps per second"
],
"free_form_answer": "",
"highlighted_evidence": [
"For our machine translation experiments, the result was a modest 7% decrease in steps per second, but we were able to maintain the same model and batch sizes on P100 GPUs as Vaswani et al. vaswani2017."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b",
"29f7e9a3066eeb565967829047eaf2a03776b74d"
]
},
{
"annotation_id": [
"092210ef8739ea9585b8c1bc0fe64425e287f6d4",
"63420066e09ba1e8cbb4523b9d8e94856b6bf5f6"
],
"answer": [
{
"evidence": [
"The edge between input elements $x_i$ and $x_j$ is represented by vectors $a^V_{ij}, a^K_{ij} \\in \\mathbb {R}^{d_a}$ . The motivation for learning two distinct edge representations is that $a^V_{ij}$ and $a^K_{ij}$ are suitable for use in eq. ( 6 ) and eq. ( 7 ), respectively, without requiring additional linear transformations. These representations can be shared across attention heads. We use $d_a = d_z$ ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The edge between input elements $x_i$ and $x_j$ is represented by vectors $a^V_{ij}, a^K_{ij} \\in \\mathbb {R}^{d_a}$ . "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Each output element, $z_i$ , is computed as weighted sum of a linearly transformed input elements:",
"$$z_i = \\sum _{j=1}^{n} \\alpha _{ij} (x_jW^V)$$ (Eq. 3)",
"And $e_{ij}$ is computed using a compatibility function that compares two input elements:",
"$$e_{ij} = \\frac{(x_iW^Q)(x_jW^K)^T}{\\sqrt{d_z}}$$ (Eq. 4)",
"We modify eq. ( 3 ) to propagate edge information to the sublayer output:",
"$$z_i = \\sum _{j=1}^{n} \\alpha _{ij} (x_jW^V + a^V_{ij})$$ (Eq. 6)",
"We also, importantly, modify eq. ( 4 ) to consider edges when determining compatibility:",
"$$e_{ij} = \\frac{x_iW^Q(x_jW^K+a^K_{ij})^T}{\\sqrt{d_z}}$$ (Eq. 7)"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Each output element, $z_i$ , is computed as weighted sum of a linearly transformed input elements:\n\n$$z_i = \\sum _{j=1}^{n} \\alpha _{ij} (x_jW^V)$$ (Eq. 3)",
"And $e_{ij}$ is computed using a compatibility function that compares two input elements:\n\n$$e_{ij} = \\frac{(x_iW^Q)(x_jW^K)^T}{\\sqrt{d_z}}$$ (Eq. 4)",
"We modify eq. ( 3 ) to propagate edge information to the sublayer output:\n\n$$z_i = \\sum _{j=1}^{n} \\alpha _{ij} (x_jW^V + a^V_{ij})$$ (Eq. 6)",
"We also, importantly, modify eq. ( 4 ) to consider edges when determining compatibility:\n\n$$e_{ij} = \\frac{x_iW^Q(x_jW^K+a^K_{ij})^T}{\\sqrt{d_z}}$$ (Eq. 7)"
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"29f7e9a3066eeb565967829047eaf2a03776b74d",
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"aae358e1de13091632e9e788300110145a6c15a6",
"c0ece84af20e9c6695125b64a050d6de2b7fbff2"
],
"answer": [
{
"evidence": [
"We also evaluated the impact of ablating each of the two relative position representations defined in section \"Conclusions\" , $a^V_{ij}$ in eq. ( 6 ) and $a^K_{ij}$ in eq. ( 7 ). Including relative position representations solely when determining compatibility between elements may be sufficient, but further work is needed to determine whether this is true for other tasks. The results are shown in Table 3 ."
],
"extractive_spans": [],
"free_form_answer": "Not sure",
"highlighted_evidence": [
"Including relative position representations solely when determining compatibility between elements may be sufficient, but further work is needed to determine whether this is true for other tasks. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"29f7e9a3066eeb565967829047eaf2a03776b74d",
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How is the training time compared to the original position encoding? ",
"Does the new relative position encoder require more parameters?",
"Can the new position representation be generalized to other tasks?"
],
"question_id": [
"1d7b99646a1bc05beec633d7a3beb083ad1e8734",
"4d887ce7dc43528098e7a3d9cd13c6c36f158c53",
"d48b5e4a7cf1f96c5b939ba9b46350887c5e5268"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"research",
"research",
"research"
]
} | {
"caption": [
"Figure 1: Example edges representing relative positions, or the distance between elements. We learn representations for each relative position within a clipping distance k. The figure assumes 2 <= k <= n − 4. Note that not all edges are shown.",
"Table 1: Experimental results for WMT 2014 English-to-German (EN-DE) and English-to-French (EN-FR) translation tasks, using newstest2014 test set.",
"Table 2: Experimental results for varying the clipping distance, k.",
"Table 3: Experimental results for ablating relative position representations aVij and a K ij ."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png"
]
} | [
"Can the new position representation be generalized to other tasks?"
] | [
[
"1803.02155-Model Variations-2"
]
] | [
"Not sure"
] | 276 |
1803.06745 | Sentiment Analysis of Code-Mixed Indian Languages: An Overview of SAIL_Code-Mixed Shared Task @ICON-2017 | Sentiment analysis is essential in many real-world applications such as stance detection, review analysis, recommendation systems, and so on. Sentiment analysis becomes more difficult when the data is noisy and collected from social media. India is a multilingual country; people use more than one language to communicate among themselves. The switching between languages is called code-switching or code-mixing, depending upon the type of mixing. This paper presents an overview of the shared task on sentiment analysis of code-mixed Hindi-English and Bengali-English data collected from different social media platforms. The paper describes the task, dataset, evaluation, baseline and participants' systems. | {
"paragraphs": [
[
"The past decade witnessed rapid growth and widespread usage of social media platforms by generating a significant amount of user-generated text. The user-generated texts contain high information content in the form of news, expression, or knowledge. Automatically mining information from user-generated data is unraveling a new field of research in Natural Language Processing (NLP) and has been a difficult task due to unstructured and noisy nature. In spite of the existing challenges, much research has been conducted on user-generated data in the field of information extraction, sentiment analysis, event extraction, user profiling and many more.",
"According to Census of India, there are 22 scheduled languages and more than 100 non scheduled languages in India. There are 462 million internet users in India and most people know more than one language. They express their feelings or emotions using more than one languages, thus generating a new code-mixed/code-switched language. The problem of code-mixing and code-switching are well studied in the field of NLP BIBREF0 , BIBREF1 . Information extraction from Indian internet user-generated texts become more difficult due to this multilingual nature. Much research has been conducted in this field such as language identification BIBREF2 , BIBREF3 , part-of-speech tagging BIBREF4 . Joshi et al. JoshiPSV16 have performed sentiment analysis in Hindi-English (HI-EN) code-mixed data and almost no work exists on sentiment analysis of Bengali-English (BN-EN) code-mixed texts. The Sentiment Analysis of Indian Language (Code-Mixed) (SAIL _Code-Mixed) is a shared task at ICON-2017. Two most popular code-mixed languages namely Hindi and Bengali mixed with English were considered for the sentiment identification task. A total of 40 participants registered for the shared task and only nine teams have submitted their predicted outputs. Out of nine unique submitted systems for evaluation, eight teams submitted fourteen runs for HI-EN dataset whereas seven teams submitted nine runs for BN-EN dataset. The training and test dataset were provided after annotating the languages and sentiment (positive, negative, and neutral) tags. The language tags were automatically annotated with the help of different dictionaries whereas the sentiment tags were manually annotated. The submitted systems are ranked using the macro average f-score.",
"The paper is organized as following manner. Section SECREF2 describes the NLP in Indian languages mainly related to code-mixing and sentiment analysis. The detailed statistics of the dataset and evaluation are described in Section SECREF3 . The baseline systems and participant's system description are described in Section SECREF4 . Finally, conclusion and future research are drawn in Section SECREF5 ."
],
[
"With the rise of social media and user-generated data, information extraction from user-generated text became an important research area. Social media has become the voice of many people over decades and it has special relations with real time events. The multilingual user have tendency to mix two or more languages while expressing their opinion in social media, this phenomenon leads to generate a new code-mixed language. So far, many studies have been conducted on why the code-mixing phenomena occurs and can be found in Kim kim2006reasons. Several experiments have been performed on social media texts including code-mixed data. The first step toward information gathering from these texts is to identify the languages present. Till date, several language identification experiments or tasks have been performed on several code-mixed language pairs such as Spanish-English BIBREF5 , BIBREF6 , French-English BIBREF7 , Hindi-English BIBREF0 , BIBREF1 , Hindi-English-Bengali BIBREF8 , Bengali-English BIBREF1 . Many shared tasks have also been organized for language identification of code-mixed texts. Language Identification in Code-Switched Data was one of the shared tasks which covered four language pairs such as Spanish-English, Modern Standard Arabic and Arabic dialects, Chinese-English, and Nepalese-English. In the case of Indian languages, Mixed Script Information Retrieval BIBREF9 shared task at FIRE-2015 was organized for eight code-mixed Indian languages such as Bangla, Gujarati, Hindi, Kannada, Malayalam, Marathi, Tamil, and Telugu mixed with English.",
"The second step is the identification of Part-of-Speech (POS) tags in code-mixed data and only handful of experiments have been performed in it such as Spanish-English BIBREF10 , Hindi-English BIBREF11 . POS Tagging for Code-mixed Indian Social Media shared task was organized for language pairs such as Bengali-English, Hindi-English, and Telugu-English. However, to best of the authors' knowledge no tasks on POS tagging were found on other code-mixed Indian languages. Again, Named Entity Recognition (NER) of code-mixed language shared task was organized for identifying named entities in Hindi-English and Tamil-English code-mixed data BIBREF12 .",
"Sentiment analysis or opinion mining from code-mixed data is one of the difficult tasks and the reasons are listed below.",
"Sentiment analysis of Hindi-English code-mixed was performed by Joshi et al. JoshiPSV16 which used sub-word level representations in LSTM architecture to perform it. This is one of the initial tasks in sentiment analysis of HI-EN code-mixed dataset. There are several applications on code-mixed data which depends on sentiment analysis such as stance detection, aspect based sentiment analysis. However, there are several tasks available on sentiment analysis of Indian language tweets BIBREF13 , BIBREF14 . The shared task on sentiment analysis in Indian languages (SAIL) tweets focused on sentiment analysis of three Indian languages: Bengali, Hindi, and Tamil BIBREF13 ."
],
[
"This section describes statistics of the dataset and the evaluation procedure. Preparing a gold standard dataset is the first step towards achieving good accuracy. Several tasks in the field of NLP suffer from lack of gold standard dataset. In the case of Indian languages, there is no such code-mixed dataset available for research purpose. Thus, we developed the dataset and the details are provided below."
],
[
"Data collection is a time consuming and tedious task in terms of human resource. Two code-mixed data pairs HI-EN and BN-EN are provided for developing sentiment analysis systems. The Twitter4j API was used to collect both Bengali and Hindi code-mixed data from Twitter. Initially, common Bengali and Hindi words were collected and then searched using the above API. The collected words are mostly sentiment words in Romanized format. Plenty of tweets had noisy words such as words from other languages and words in utf-8 format. After collection of code-mixed tweets, some were rejected. There are three reasons for which a tweet was rejected.",
"a tweet is incomplete, i.e. there is not much information available in the tweet.",
"a tweet is spam, advertisement or slang.",
"a tweet does not have either Bengali or Hindi words.",
"The hashtags and urls are kept unchanged. Then words are automatically tagged with language information using a dictionary which is developed manually. Finally, tweets are manually annotated with the positive, negative, and neutral polarity. Missed language tags or wrongly annotated language tags are corrected manually during sentiment annotation.",
"Any of the six language tags is used to annotate the language to each of the words and these are HI (Hindi), EN (English), BN (Bengali), UN(Universal), MIX (Mix of two languages), EMT (emoticons). MIX words are basically the English words with Hindi or Bengali suffix, for example, Delhite (in Delhi). Sometimes, the words are joined together by mistake due to the typing errors, for example, jayegiTension (tension will go away). UN words are basically symbols, hashtags, or name etc. The statistics of training and test tweets for Bengali and Hindi code-mixed datasets are provided in Table TABREF23 . Some examples of HI-EN and BN-EN datasets with sentiment tags are given below.",
"BI-EN: Irrfan Khan hollywood e abar dekha debe, trailer ta toh awesome ar acting o enjoyable. (positive)",
"Tagged: Irrfan/EN Khan/EN hollywood/EN e/BN abar/BN dekha/BN debe/BN ,/UN trailer/EN ta/BN toh/BN awesome/EN ar/BN acting/EN o/BN enjoyable/EN ./UN",
"Translation: Irrfan Khan will be seen in Hollywood again, trailer is awesome and acting is also enjoyable.",
"BI-EN: Ei movie take bar bar dekheo er matha mundu kichui bojha jaye na. Everything boddo confusing and amar mote not up to the mark. (negative)",
"Tagged: Ei/BN movie/EN take/BN bar/BN bar/BN dekheo/BN er/BN matha/BN mundu/BN kichui/BN bojha/BN jaye/BN na/BN ./UN Everything/EN boddo/BN confusing/EN and/EN amar/BN mote/BN not/EN up/EN to/EN the/EN mark/EN ./UN",
"Translation: After watching repeated times I can't understand anything. Everything is so confusing and I think its not up to the mark.",
"HI-EN: bhai jan duaa hei k appki film sooper dooper hit ho (positive)",
"Tagged: bhai/HI jan/HI duaa/HI hei/HI k/HI appki/HI film/HI sooper/EN dooper/HI hit/HI ho/HI ./UN",
"Translation: Brother I pray that your film will be a super duper hit.",
"HI-EN: yaaaro yeah #railbudget2015 kitne baaje start hooga ? (neutral)",
"Tagged: yaaaro/HI yeah/EN #railbudget2015/EN kitne/HI baaje/HI start/EN hooga/EN ?/UN",
"Translation: Friends, when will #railbudget2015 start?"
],
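As a rough illustration of the dictionary-based language tagging step described above, the sketch below assigns one of the six tags to each token by simple lookup. The word lists are tiny hypothetical placeholders rather than the organizers' actual dictionaries, MIX words (English stems with Hindi/Bengali suffixes such as "Delhite") are not handled, and, as noted above, tags missed by the dictionaries are corrected manually.

```python
# Minimal, assumed sketch of dictionary-based word-level language tagging.
# The lexicons below are illustrative placeholders, not the real dictionaries.
hi_words = {"kitne", "baaje", "bhai", "duaa", "hooga"}   # hypothetical Romanized Hindi entries
bn_words = {"abar", "dekha", "debe", "boddo", "amar"}    # hypothetical Romanized Bengali entries
en_words = {"movie", "trailer", "awesome", "start", "film", "yeah"}
emoticons = {":)", ":(", ":D", ";)"}

def tag_token(token):
    low = token.lower()
    if token in emoticons:
        return "EMT"
    if low in hi_words:
        return "HI"
    if low in bn_words:
        return "BN"
    if low in en_words:
        return "EN"
    return "UN"  # symbols, hashtags, names, and out-of-dictionary words

sentence = "yaaaro yeah #railbudget2015 kitne baaje start hooga ?"
print(" ".join(f"{tok}/{tag_token(tok)}" for tok in sentence.split()))
# -> yaaaro/UN yeah/EN #railbudget2015/UN kitne/HI baaje/HI start/EN hooga/HI ?/UN
```

Differences from the gold tags in the examples above (e.g., hooga) reflect gaps in the placeholder lexicons; in the actual annotation pipeline such cases are fixed manually during sentiment annotation.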
[
"The precision, recall and f-score are calculated using the sklearn package of scikit-learn BIBREF15 . The macro average f-score is used to rank the submitted systems, because it independently calculates the metric for each classes and then takes the average hence treating all classes equally. Two different types of evaluation are considered and these are described below.",
"Overall: The macro average precision, recall, and f-score are calculated for all submitted runs.",
"Two way: Then, two way classification approach is used where the system will be evaluated on two classes. For positive sentiment calculation, the predicted negative and neutral tags are converted to other for both gold and predicted output by making the task as binary classification. Then, the macro averaged precision, recall, and f-score are calculated. Similar process is also applied for negative and neural metrics calculation."
],
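A minimal sketch of the two evaluation settings described above is given here, assuming gold and predicted labels are plain Python lists; the toy labels are illustrative and this is not the official shared-task scorer.

```python
# Sketch of the overall and two-way evaluation (assumed re-implementation, not the official scorer).
from sklearn.metrics import precision_recall_fscore_support

gold = ["positive", "negative", "neutral", "positive", "neutral"]   # toy gold labels
pred = ["positive", "neutral",  "neutral", "negative", "neutral"]   # toy system output

# Overall: macro-averaged precision, recall, and f-score over the three classes.
p, r, f, _ = precision_recall_fscore_support(gold, pred, average="macro", zero_division=0)
print(f"overall            P={p:.3f} R={r:.3f} F={f:.3f}")

# Two-way: for each target class, collapse the remaining classes into "other"
# in both gold and predicted labels, then macro-average over the two classes.
for target in ("positive", "negative", "neutral"):
    g2 = [t if t == target else "other" for t in gold]
    p2 = [t if t == target else "other" for t in pred]
    p, r, f, _ = precision_recall_fscore_support(g2, p2, average="macro", zero_division=0)
    print(f"two-way {target:<10s} P={p:.3f} R={r:.3f} F={f:.3f}")
```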
[
"The baseline systems are developed by randomly assigning any of the sentiment values to each of the test instances. Then, similar evaluation techniques are applied to the baseline system and the results are presented in Table TABREF29 ."
],
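The random baseline referenced above can be reproduced in a few lines; the fixed seed and toy gold labels below are assumptions for illustration, not the organizers' exact setup.

```python
# Sketch of the random-label baseline (assumed reproduction).
import random
from sklearn.metrics import precision_recall_fscore_support

LABELS = ["positive", "negative", "neutral"]

def random_baseline(test_instances, seed=0):
    rng = random.Random(seed)                 # seed chosen only for reproducibility of the sketch
    return [rng.choice(LABELS) for _ in test_instances]

gold = ["positive", "negative", "neutral", "neutral", "positive"]   # toy gold labels
pred = random_baseline(gold)
p, r, f, _ = precision_recall_fscore_support(gold, pred, average="macro", zero_division=0)
print(f"random baseline: macro P={p:.3f} R={r:.3f} F={f:.3f}")
```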
[
"This subsection describes the details of systems submitted for the shared task. Six teams have submitted their system details and those are described below in order of decreasing f-score.",
"IIIT-NBP team used features like GloVe word embeddings with 300 dimension and TF-IDF scores of word n-grams (one-gram, two-grams and tri-grams) as well as character n-grams (n varying from 2 to 6). Sklearn BIBREF15 package is used to calculate the TF-IDF. Finally, two classifiers: ensemble voting (consisting of three classifiers - linear SVM, logistic regression and random forests) and linear SVM are used for classification.",
"JU_KS team used n-gram and sentiment lexicon based features. Small sentiment lexicons are manually prepared for both English and Bengali words. However, no sentiment lexicon is used for Hindi language. Bengali sentiment lexicon consists of a collection of 1700 positive and 3750 negative words whereas English sentiment lexicon consists of 2006 positive and 4783 negative words. Finally, Naïve Bayes multinomial is used to classify and system results are presented in Table TABREF29 .",
"BIT Mesra team submitted systems for only HI-EN dataset. During preprocessing, they removed words having UN language tags, URLs, hashtags and user mentions. An Emoji dictionary was prepared with sentiment tags. Finally, they used SVM and Naïve Bayes classifiers on uni-gram and bi-gram features to classify sentiment of the code-mixed HI-EN dataset only.",
"NLP_CEN_AMRITA team have used different distributional and distributed representation. They used Document Term Matrix with N-gram varying from 1 to 5 for the representation and Support Vector Machines (SVM) as a classifier to make the final prediction. Their system performed well for n-grams range 5 and minimum document frequency 2 using the linear kernel.",
"CFIL team uses simple deep learning for sentiment analysis on code-mixed data. The fastText tool is used to create word embeddings on sentiment corpus. Additionally, Convolutional Neural Networks was used to extract sub-word features. Bi-LSTM layer is used on word embedding and sub-word features together with max-pooling at the output which is again sent to a softmax layer for prediction. No additional features are used and hyper-parameters are selected after dividing training corpus to 70% and 30%.",
"Subway team submitted systems for HI-EN dataset only. Initially, words other than HI and EN tags are removed during the cleaning process. Then, a dictionary with bi-grams and tri-grams are collected from training data and sentiment polarity is annotated manually. TF-IDF scores for each matched n-grams are calculated and weights of 1.3 and 0.7 are assigned to bi-grams and tri-grams, respectively. Finally, Naïve Bayes classifier is used to get the sentiment."
],
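As referenced in the IIIT-NBP description above, the sketch below shows the general shape of a word- and character-level TF-IDF n-gram pipeline with a linear SVM and a voting ensemble, built from standard sklearn components. It illustrates the recipe rather than the team's actual system: the GloVe embedding features, preprocessing, and tuned hyper-parameters are omitted, and the toy training examples are invented.

```python
# Sketch of a word+character TF-IDF n-gram pipeline with a voting ensemble (not the IIIT-NBP code).
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

features = FeatureUnion([
    ("word_ngrams", TfidfVectorizer(analyzer="word", ngram_range=(1, 3))),
    ("char_ngrams", TfidfVectorizer(analyzer="char", ngram_range=(2, 6))),
])

ensemble = VotingClassifier(
    estimators=[
        ("svm", LinearSVC()),
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100)),
    ],
    voting="hard",   # hard voting, since LinearSVC does not expose probabilities
)

model = Pipeline([("features", features), ("clf", ensemble)])

# Toy code-mixed examples; the real inputs are the annotated tweets described above.
X_train = ["trailer ta toh awesome", "boddo confusing and not up to the mark", "kitne baaje start hooga"]
y_train = ["positive", "negative", "neutral"]
model.fit(X_train, y_train)
print(model.predict(["acting o enjoyable"]))
```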
[
"The baseline systems achieved better scores compared to CEN@AMRIT and SVNIT teams for HI-EN dataset; and AMRITA_CEN, CEN@Amrita and SVNIT teams for BN-EN dataset. IIIT-NBP team has achieved the maximum macro average f-score of 0.569 across all the sentiment classes for HI-EN dataset. IIIT-NBP also achieved the maximum macro average f-score of 0.526 for BN-EN dataset. Two way classification of HI-EN dataset achieved the maximum macro average f-score of 0.707, 0.666, and 0.663 for positive, negative, and neutral, respectively. Similarly, the two way classification of BN-EN dataset achieved the maximum average f-score of 0.641, 0.677, and 0.621 for positive, negative, and neutral, respectively. Again, the f-measure achieved using HI-EN dataset is better than BN-EN. The obvious reason for such result is that there are more instances in HI-EN than BN-EN dataset.",
"Most of the teams used the n-gram based features and it resulted in better macro average f-score. Most teams used the sklearn for identifying n-grams. IIITH-NBP team is only team to use character n-grams. Word embeddings is another important feature used by several teams. For word embeddings, Gensim and fastText are used. JU_KS team has used sentiment lexicon based features for BN-EN dataset only. BITMesra team has used emoji dictionary annotated with sentiment. Hashtags are considered to be one of the most important features for sentiment analysis BIBREF16 , however they removed hashtags during sentiment identification.",
"Apart from the features, most of the teams used machine learning algorithms like SVM, Naïve Bayes. It is observed that the deep learning models are quite successful for many NLP tasks. CFIL team have used the deep learning framework however the deep learning based system did not perform well as compared to machine learning based system. The main reason for the above may be that the training datasets provided are not sufficient to built a deep learning model."
],
[
"This paper presents the details of shared task held during the ICON 2017. The competition presents the sentiment identification task from HI-EN and BN-EN code-mixed datasets. A random baseline system obtained macro average f-score of 0.331 and 0.339 for HI-EN and BN-EN datasets, respectively. The best performing team obtained maximum macro average f-score of 0.569 and 0.526 for HI-EN and BN-EN datasets, respectively. The team used word and character level n-grams as features and SVM for sentiment classification. We plan to enhance the current dataset and include more data pairs in the next version of the shared task. In future, more advanced task like aspect based sentiment analysis and stance detection can be performed on code-mixed dataset."
]
],
"section_name": [
"Introduction",
"Related Work",
"Dataset and Evaluation",
"Dataset",
"Evaluation",
"Baseline",
"System Descriptions",
"Results and Discussion",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"cf3f921461540c3668a0e7939ebd9316335a3243",
"fe3830c3b562b984d044c09f807e4d189973d4a9"
],
"answer": [
{
"evidence": [
"Data collection is a time consuming and tedious task in terms of human resource. Two code-mixed data pairs HI-EN and BN-EN are provided for developing sentiment analysis systems. The Twitter4j API was used to collect both Bengali and Hindi code-mixed data from Twitter. Initially, common Bengali and Hindi words were collected and then searched using the above API. The collected words are mostly sentiment words in Romanized format. Plenty of tweets had noisy words such as words from other languages and words in utf-8 format. After collection of code-mixed tweets, some were rejected. There are three reasons for which a tweet was rejected."
],
"extractive_spans": [
"Twitter"
],
"free_form_answer": "",
"highlighted_evidence": [
"Data collection is a time consuming and tedious task in terms of human resource. Two code-mixed data pairs HI-EN and BN-EN are provided for developing sentiment analysis systems. The Twitter4j API was used to collect both Bengali and Hindi code-mixed data from Twitter"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Data collection is a time consuming and tedious task in terms of human resource. Two code-mixed data pairs HI-EN and BN-EN are provided for developing sentiment analysis systems. The Twitter4j API was used to collect both Bengali and Hindi code-mixed data from Twitter. Initially, common Bengali and Hindi words were collected and then searched using the above API. The collected words are mostly sentiment words in Romanized format. Plenty of tweets had noisy words such as words from other languages and words in utf-8 format. After collection of code-mixed tweets, some were rejected. There are three reasons for which a tweet was rejected."
],
"extractive_spans": [
"Twitter"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Twitter4j API was used to collect both Bengali and Hindi code-mixed data from Twitter. Initially, common Bengali and Hindi words were collected and then searched using the above API."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"33369cb609af1766b9c3e08106821d4093731415",
"3504052c86f022b01f12418898625b30ea4e9bfa"
],
"answer": [
{
"evidence": [
"Data collection is a time consuming and tedious task in terms of human resource. Two code-mixed data pairs HI-EN and BN-EN are provided for developing sentiment analysis systems. The Twitter4j API was used to collect both Bengali and Hindi code-mixed data from Twitter. Initially, common Bengali and Hindi words were collected and then searched using the above API. The collected words are mostly sentiment words in Romanized format. Plenty of tweets had noisy words such as words from other languages and words in utf-8 format. After collection of code-mixed tweets, some were rejected. There are three reasons for which a tweet was rejected.",
"Any of the six language tags is used to annotate the language to each of the words and these are HI (Hindi), EN (English), BN (Bengali), UN(Universal), MIX (Mix of two languages), EMT (emoticons). MIX words are basically the English words with Hindi or Bengali suffix, for example, Delhite (in Delhi). Sometimes, the words are joined together by mistake due to the typing errors, for example, jayegiTension (tension will go away). UN words are basically symbols, hashtags, or name etc. The statistics of training and test tweets for Bengali and Hindi code-mixed datasets are provided in Table TABREF23 . Some examples of HI-EN and BN-EN datasets with sentiment tags are given below.",
"FLOAT SELECTED: Table 1: Statistics of the training and test dataset with respect to sentiment"
],
"extractive_spans": [],
"free_form_answer": "18461 for Hindi-English and 5538 for Bengali-English",
"highlighted_evidence": [
"Data collection is a time consuming and tedious task in terms of human resource. Two code-mixed data pairs HI-EN and BN-EN are provided for developing sentiment analysis systems. The Twitter4j API was used to collect both Bengali and Hindi code-mixed data from Twitter. Initially, common Bengali and Hindi words were collected and then searched using the above API. The collected words are mostly sentiment words in Romanized format. Plenty of tweets had noisy words such as words from other languages and words in utf-8 format. ",
"Any of the six language tags is used to annotate the language to each of the words and these are HI (Hindi), EN (English), BN (Bengali), UN(Universal), MIX (Mix of two languages), EMT (emoticons). MIX words are basically the English words with Hindi or Bengali suffix, for example, Delhite (in Delhi). Sometimes, the words are joined together by mistake due to the typing errors, for example, jayegiTension (tension will go away). UN words are basically symbols, hashtags, or name etc. The statistics of training and test tweets for Bengali and Hindi code-mixed datasets are provided in Table TABREF23 ",
"FLOAT SELECTED: Table 1: Statistics of the training and test dataset with respect to sentiment"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Any of the six language tags is used to annotate the language to each of the words and these are HI (Hindi), EN (English), BN (Bengali), UN(Universal), MIX (Mix of two languages), EMT (emoticons). MIX words are basically the English words with Hindi or Bengali suffix, for example, Delhite (in Delhi). Sometimes, the words are joined together by mistake due to the typing errors, for example, jayegiTension (tension will go away). UN words are basically symbols, hashtags, or name etc. The statistics of training and test tweets for Bengali and Hindi code-mixed datasets are provided in Table TABREF23 . Some examples of HI-EN and BN-EN datasets with sentiment tags are given below.",
"FLOAT SELECTED: Table 1: Statistics of the training and test dataset with respect to sentiment"
],
"extractive_spans": [],
"free_form_answer": "HI-EN dataset has total size of of 18461 while BN-EN has total size of 5538. ",
"highlighted_evidence": [
"The statistics of training and test tweets for Bengali and Hindi code-mixed datasets are provided in Table TABREF23 .",
"FLOAT SELECTED: Table 1: Statistics of the training and test dataset with respect to sentiment"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1392d828e73820b62bff5411799fbc05e0223d32",
"99834bdff8e97e36b3e3345f8485fcef358601d9"
],
"answer": [
{
"evidence": [
"This subsection describes the details of systems submitted for the shared task. Six teams have submitted their system details and those are described below in order of decreasing f-score."
],
"extractive_spans": [
"Six"
],
"free_form_answer": "",
"highlighted_evidence": [
"Six teams have submitted their system details and those are described below in order of decreasing f-score."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"According to Census of India, there are 22 scheduled languages and more than 100 non scheduled languages in India. There are 462 million internet users in India and most people know more than one language. They express their feelings or emotions using more than one languages, thus generating a new code-mixed/code-switched language. The problem of code-mixing and code-switching are well studied in the field of NLP BIBREF0 , BIBREF1 . Information extraction from Indian internet user-generated texts become more difficult due to this multilingual nature. Much research has been conducted in this field such as language identification BIBREF2 , BIBREF3 , part-of-speech tagging BIBREF4 . Joshi et al. JoshiPSV16 have performed sentiment analysis in Hindi-English (HI-EN) code-mixed data and almost no work exists on sentiment analysis of Bengali-English (BN-EN) code-mixed texts. The Sentiment Analysis of Indian Language (Code-Mixed) (SAIL _Code-Mixed) is a shared task at ICON-2017. Two most popular code-mixed languages namely Hindi and Bengali mixed with English were considered for the sentiment identification task. A total of 40 participants registered for the shared task and only nine teams have submitted their predicted outputs. Out of nine unique submitted systems for evaluation, eight teams submitted fourteen runs for HI-EN dataset whereas seven teams submitted nine runs for BN-EN dataset. The training and test dataset were provided after annotating the languages and sentiment (positive, negative, and neutral) tags. The language tags were automatically annotated with the help of different dictionaries whereas the sentiment tags were manually annotated. The submitted systems are ranked using the macro average f-score."
],
"extractive_spans": [
"nine"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Sentiment Analysis of Indian Language (Code-Mixed) (SAIL _Code-Mixed) is a shared task at ICON-2017. Two most popular code-mixed languages namely Hindi and Bengali mixed with English were considered for the sentiment identification task. A total of 40 participants registered for the shared task and only nine teams have submitted their predicted outputs. Out of nine unique submitted systems for evaluation, eight teams submitted fourteen runs for HI-EN dataset whereas seven teams submitted nine runs for BN-EN dataset"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"c2c37e6aa2d54bdff484eaca31909649fe27c3ff",
"f4290012f6b0da47149bbe077012672b00fb2415"
],
"answer": [
{
"evidence": [
"The baseline systems are developed by randomly assigning any of the sentiment values to each of the test instances. Then, similar evaluation techniques are applied to the baseline system and the results are presented in Table TABREF29 ."
],
"extractive_spans": [],
"free_form_answer": "Random labeling",
"highlighted_evidence": [
"\nThe baseline systems are developed by randomly assigning any of the sentiment values to each of the test instances. Then, similar evaluation techniques are applied to the baseline system and the results are presented in Table TABREF29 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The baseline systems are developed by randomly assigning any of the sentiment values to each of the test instances. Then, similar evaluation techniques are applied to the baseline system and the results are presented in Table TABREF29 ."
],
"extractive_spans": [
" randomly assigning any of the sentiment values to each of the test instances"
],
"free_form_answer": "",
"highlighted_evidence": [
"The baseline systems are developed by randomly assigning any of the sentiment values to each of the test instances."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"095a03c7e1ede464850ae3b702273a4aac997c01",
"bd4cb97c9f37e8599ab1c8cda374159d5cca541c"
],
"answer": [
{
"evidence": [
"The precision, recall and f-score are calculated using the sklearn package of scikit-learn BIBREF15 . The macro average f-score is used to rank the submitted systems, because it independently calculates the metric for each classes and then takes the average hence treating all classes equally. Two different types of evaluation are considered and these are described below."
],
"extractive_spans": [
"precision, recall and f-score "
],
"free_form_answer": "",
"highlighted_evidence": [
"The precision, recall and f-score are calculated using the sklearn package of scikit-learn BIBREF15 . The macro average f-score is used to rank the submitted systems, because it independently calculates the metric for each classes and then takes the average hence treating all classes equally."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The precision, recall and f-score are calculated using the sklearn package of scikit-learn BIBREF15 . The macro average f-score is used to rank the submitted systems, because it independently calculates the metric for each classes and then takes the average hence treating all classes equally. Two different types of evaluation are considered and these are described below.",
"Overall: The macro average precision, recall, and f-score are calculated for all submitted runs.",
"Two way: Then, two way classification approach is used where the system will be evaluated on two classes. For positive sentiment calculation, the predicted negative and neutral tags are converted to other for both gold and predicted output by making the task as binary classification. Then, the macro averaged precision, recall, and f-score are calculated. Similar process is also applied for negative and neural metrics calculation."
],
"extractive_spans": [
"The macro average precision, recall, and f-score"
],
"free_form_answer": "",
"highlighted_evidence": [
"Two different types of evaluation are considered and these are described below.\n\nOverall: The macro average precision, recall, and f-score are calculated for all submitted runs.\n\nTwo way: Then, two way classification approach is used where the system will be evaluated on two classes. For positive sentiment calculation, the predicted negative and neutral tags are converted to other for both gold and predicted output by making the task as binary classification. Then, the macro averaged precision, recall, and f-score are calculated. Similar process is also applied for negative and neural metrics calculation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ba71e5450576d4ea173e6ae75aef55007726deca",
"deeb6a933651073f7f8ae8d7915f9263576874b2"
],
"answer": [
{
"evidence": [
"Data collection is a time consuming and tedious task in terms of human resource. Two code-mixed data pairs HI-EN and BN-EN are provided for developing sentiment analysis systems. The Twitter4j API was used to collect both Bengali and Hindi code-mixed data from Twitter. Initially, common Bengali and Hindi words were collected and then searched using the above API. The collected words are mostly sentiment words in Romanized format. Plenty of tweets had noisy words such as words from other languages and words in utf-8 format. After collection of code-mixed tweets, some were rejected. There are three reasons for which a tweet was rejected.",
"Any of the six language tags is used to annotate the language to each of the words and these are HI (Hindi), EN (English), BN (Bengali), UN(Universal), MIX (Mix of two languages), EMT (emoticons). MIX words are basically the English words with Hindi or Bengali suffix, for example, Delhite (in Delhi). Sometimes, the words are joined together by mistake due to the typing errors, for example, jayegiTension (tension will go away). UN words are basically symbols, hashtags, or name etc. The statistics of training and test tweets for Bengali and Hindi code-mixed datasets are provided in Table TABREF23 . Some examples of HI-EN and BN-EN datasets with sentiment tags are given below."
],
"extractive_spans": [],
"free_form_answer": "Bengali-English and Hindi-English",
"highlighted_evidence": [
"Data collection is a time consuming and tedious task in terms of human resource. Two code-mixed data pairs HI-EN and BN-EN are provided for developing sentiment analysis systems. The Twitter4j API was used to collect both Bengali and Hindi code-mixed data from Twitter. Initially, common Bengali and Hindi words were collected and then searched using the above API. The collected words are mostly sentiment words in Romanized format. Plenty of tweets had noisy words such as words from other languages and words in utf-8 format. After collection of code-mixed tweets, some were rejected. There are three reasons for which a tweet was rejected.",
"Any of the six language tags is used to annotate the language to each of the words and these are HI (Hindi), EN (English), BN (Bengali), UN(Universal), MIX (Mix of two languages), EMT (emoticons). MIX words are basically the English words with Hindi or Bengali suffix, for example, Delhite (in Delhi). Sometimes, the words are joined together by mistake due to the typing errors, for example, jayegiTension (tension will go away). UN words are basically symbols, hashtags, or name etc. The statistics of training and test tweets for Bengali and Hindi code-mixed datasets are provided in Table TABREF23 . Some examples of HI-EN and BN-EN datasets with sentiment tags are given below."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Data collection is a time consuming and tedious task in terms of human resource. Two code-mixed data pairs HI-EN and BN-EN are provided for developing sentiment analysis systems. The Twitter4j API was used to collect both Bengali and Hindi code-mixed data from Twitter. Initially, common Bengali and Hindi words were collected and then searched using the above API. The collected words are mostly sentiment words in Romanized format. Plenty of tweets had noisy words such as words from other languages and words in utf-8 format. After collection of code-mixed tweets, some were rejected. There are three reasons for which a tweet was rejected."
],
"extractive_spans": [
"HI-EN",
"BN-EN"
],
"free_form_answer": "",
"highlighted_evidence": [
"Data collection is a time consuming and tedious task in terms of human resource. Two code-mixed data pairs HI-EN and BN-EN are provided for developing sentiment analysis systems. The Twitter4j API was used to collect both Bengali and Hindi code-mixed data from Twitter."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"which social media platforms was the data collected from?",
"how many data pairs were there for each dataset?",
"how many systems were there?",
"what was the baseline?",
"what metrics did they use for evaluation?",
"what datasets did they use?"
],
"question_id": [
"de344aeb089affebd15a8c370ae9ab5734e99203",
"84327a0a9321bf266e22d155dfa94828784595ce",
"c2037887945abbdf959389dc839a86bc82594505",
"e9a0a69eacd554141f56b60ab2d1912cc33f526a",
"5b2839bef513e5d441f0bb8352807f673f4b2070",
"2abf916bc03222d3b2a3d66851d87921ff35c0d2"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Statistics of the training and test dataset with respect to sentiment",
"Table 2: SAIL CodeMixed 2017 rankings ordered by F-Score of Overall Systems. (P: Precision, R: Recall, F: F-score)"
],
"file": [
"5-Table1-1.png",
"6-Table2-1.png"
]
} | [
"how many data pairs were there for each dataset?",
"what was the baseline?",
"what datasets did they use?"
] | [
[
"1803.06745-Dataset-5",
"1803.06745-Dataset-0",
"1803.06745-5-Table1-1.png"
],
[
"1803.06745-Baseline-0"
],
[
"1803.06745-Dataset-5",
"1803.06745-Dataset-0"
]
] | [
"HI-EN dataset has total size of of 18461 while BN-EN has total size of 5538. ",
"Random labeling",
"Bengali-English and Hindi-English"
] | 277 |
1709.05295 | And That's A Fact: Distinguishing Factual and Emotional Argumentation in Online Dialogue | We investigate the characteristics of factual and emotional argumentation styles observed in online debates. Using an annotated set of"factual"and"feeling"debate forum posts, we extract patterns that are highly correlated with factual and emotional arguments, and then apply a bootstrapping methodology to find new patterns in a larger pool of unannotated forum posts. This process automatically produces a large set of patterns representing linguistic expressions that are highly correlated with factual and emotional language. Finally, we analyze the most discriminating patterns to better understand the defining characteristics of factual and emotional arguments. | {
"paragraphs": [
[
"Human lives are being lived online in transformative ways: people can now ask questions, solve problems, share opinions, or discuss current events with anyone they want, at any time, in any location, on any topic. The purposes of these exchanges are varied, but a significant fraction of them are argumentative, ranging from hot-button political controversies (e.g., national health care) to religious interpretation (e.g., Biblical exegesis). And while the study of the structure of arguments has a long lineage in psychology BIBREF0 and rhetoric BIBREF1 , large shared corpora of natural informal argumentative dialogues have only recently become available.",
"Natural informal dialogues exhibit a much broader range of argumentative styles than found in traditional work on argumentation BIBREF2 , BIBREF0 , BIBREF3 , BIBREF4 . Recent work has begun to model different aspects of these natural informal arguments, with tasks including stance classification BIBREF5 , BIBREF6 , argument summarization BIBREF7 , sarcasm detection BIBREF8 , and work on the detailed structure of arguments BIBREF9 , BIBREF10 , BIBREF11 . Successful models of these tasks have many possible applications in sentiment detection, automatic summarization, argumentative agents BIBREF12 , and in systems that support human argumentative behavior BIBREF13 .",
"Our research examines factual versus feeling argument styles, drawing on annotations provided in the Internet Argument Corpus (IAC) BIBREF6 . This corpus includes quote-response pairs that were manually annotated with respect to whether the response is primarily a factual or feeling based argument, as Section SECREF2 describes in more detail. Figure FIGREF1 provides examples of responses in the IAC (paired with preceding quotes to provide context), along with the response's factual vs. feeling label.",
"factual responses may try to bolster their argument by providing statistics related to a position, giving historical or scientific background, or presenting specific examples or data. There is clearly a relationship between a proposition being factual versus objective or veridical, although each of these different labelling tasks may elicit differences from annotators BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 .",
"The feeling responses may seem to lack argumentative merit, but previous work on argumentation describes situations in which such arguments can be effective, such as the use of emotive arguments to draw attention away from the facts, or to frame a discussion in a particular way BIBREF18 , BIBREF19 . Furthermore, work on persuasion suggest that feeling based arguments can be more persuasive in particular circumstances, such as when the hearer shares a basis for social identity with the source (speaker) BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 . However none of this work has documented the linguistic patterns that characterize the differences in these argument types, which is a necessary first step to their automatic recognition or classification. Thus the goal of this paper is to use computational methods for pattern-learning on conversational arguments to catalog linguistic expressions and stylistic properties that distinguish Factual from Emotional arguments in these on-line debate forums.",
"Section SECREF2 describes the manual annotations for factual and feeling in the IAC corpus. Section SECREF5 then describes how we generate lexico-syntactic patterns that occur in both types of argument styles. We use a weakly supervised pattern learner in a bootstrapping framework to automatically generate lexico-syntactic patterns from both annotated and unannotated debate posts. Section SECREF3 evaluates the precision and recall of the factual and feeling patterns learned from the annotated texts and after bootstrapping on the unannotated texts. We also present results for a supervised learner with bag-of-word features to assess the difficulty of this task. Finally, Section SECREF4 presents analyses of the linguistic expressions found by the pattern learner and presents several observations about the different types of linguistic structures found in factual and feeling based argument styles. Section SECREF5 discusses related research, and Section SECREF6 sums up and proposes possible avenues for future work."
],
[
"We first describe the corpus of online debate posts used for our research, and then present a bootstrapping method to identify linguistic expressions associated with factual and feeling arguments."
],
[
"The IAC corpus is a freely available annotated collection of 109,553 forum posts (11,216 discussion threads). In such forums, conversations are started by posting a topic or a question in a particular category, such as society, politics, or religion BIBREF6 . Forum participants can then post their opinions, choosing whether to respond directly to a previous post or to the top level topic (start a new thread). These discussions are essentially dialogic; however the affordances of the forum such as asynchrony, and the ability to start a new thread rather than continue an existing one, leads to dialogic structures that are different than other multiparty informal conversations BIBREF25 . An additional source of dialogic structure in these discussions, above and beyond the thread structure, is the use of the quote mechanism, which is an interface feature that allows participants to optionally break down a previous post into the components of its argument and respond to each component in turn.",
"The IAC includes 10,003 Quote-Response (Q-R) pairs with annotations for factual vs. feeling argument style, across a range of topics. Figure FIGREF4 shows the wording of the survey question used to collect the annotations. Fact vs. Feeling was measured as a scalar ranging from -5 to +5, because previous work suggested that taking the means of scalar annotations reduces noise in Mechanical Turk annotations BIBREF26 . Each of the pairs was annotated by 5-7 annotators. For our experiments, we use only the response texts and assign a binary Fact or Feel label to each response: texts with score INLINEFORM0 1 are assigned to the fact class and texts with score INLINEFORM1 -1 are assigned to the feeling class. We did not use the responses with scores between -1 and 1 because they had a very weak Fact/Feeling assessment, which could be attributed to responses either containing aspects of both factual and feeling expression, or neither. The resulting set contains 3,466 fact and 2,382 feeling posts. We randomly partitioned the fact/feel responses into three subsets: a training set with 70% of the data (2,426 fact and 1,667 feeling posts), a development (tuning) set with 20% of the data (693 fact and 476 feeling posts), and a test set with 10% of the data (347 fact and 239 feeling posts). For the bootstrapping method, we also used 11,560 responses from the unannotated data."
],
[
"The goal of our research is to gain insights into the types of linguistic expressions and properties that are distinctive and common in factual and feeling based argumentation. We also explore whether it is possible to develop a high-precision fact vs. feeling classifier that can be applied to unannotated data to find new linguistic expressions that did not occur in our original labeled corpus.",
"To accomplish this, we use the AutoSlog-TS system BIBREF27 to extract linguistic expressions from the annotated texts. Since the IAC also contains a large collection of unannotated texts, we then embed AutoSlog-TS in a bootstrapping framework to learn additional linguistic expressions from the unannotated texts. First, we briefly describe the AutoSlog-TS pattern learner and the set of pattern templates that we used. Then, we present the bootstrapping process to learn more Fact/Feeling patterns from unannotated texts.",
"To learn patterns from texts labeled as fact or feeling arguments, we use the AutoSlog-TS BIBREF27 extraction pattern learner, which is freely available for research. AutoSlog-TS is a weakly supervised pattern learner that requires training data consisting of documents that have been labeled with respect to different categories. For our purposes, we provide AutoSlog-TS with responses that have been labeled as either fact or feeling.",
"AutoSlog-TS uses a set of syntactic templates to define different types of linguistic expressions. The left-hand side of Figure FIGREF8 shows the set of syntactic templates defined in the AutoSlog-TS software package. PassVP refers to passive voice verb phrases (VPs), ActVP refers to active voice VPs, InfVP refers to infinitive VPs, and AuxVP refers to VPs where the main verb is a form of “to be” or “to have”. Subjects (subj), direct objects (dobj), noun phrases (np), and possessives (genitives) can be extracted by the patterns. AutoSlog-TS applies the Sundance shallow parser BIBREF28 to each sentence and finds every possible match for each pattern template. For each match, the template is instantiated with the corresponding words in the sentence to produce a specific lexico-syntactic expression. The right-hand side of Figure FIGREF8 shows an example of a specific lexico-syntactic pattern that corresponds to each general pattern template.",
"In addition to the original 17 pattern templates in AutoSlog-TS (shown in Figure FIGREF8 ), we defined 7 new pattern templates for the following bigrams and trigrams: Adj Noun, Adj Conj Adj, Adv Adv, Adv Adv Adv, Adj Adj, Adv Adj, Adv Adv Adj. We added these n-gram patterns to provide coverage for adjective and adverb expressions because the original templates were primarily designed to capture noun phrase and verb phrase expressions.",
"The learning process in AutoSlog-TS has two phases. In the first phase, the pattern templates are applied to the texts exhaustively, so that lexico-syntactic patterns are generated for (literally) every instantiation of the templates that appear in the corpus. In the second phase, AutoSlog-TS uses the labels associated with the texts to compute statistics for how often each pattern occurs in each class of texts. For each pattern INLINEFORM0 , we collect P(factual INLINEFORM1 INLINEFORM2 ) and P(feeling INLINEFORM3 INLINEFORM4 ), as well as the pattern's overall frequency in the corpus.",
"Since the IAC data set contains a large number of unannotated debate forum posts, we embedd AutoSlog-TS in a bootstrapping framework to learn additional patterns. The flow diagram for the bootstrapping system is shown in Figure FIGREF10 .",
"Initially, we give the labeled training data to AutoSlog-TS, which generates patterns and associated statistics. The next step identifies high-precision patterns that can be used to label some of the unannotated texts as factual or feeling. We define two thresholds: INLINEFORM0 to represent a minimum frequency value, and INLINEFORM1 to represent a minimum probability value. We found that using only a small set of patterns (when INLINEFORM2 is set to a high value) achieves extremely high precision, yet results in a very low recall. Instead, we adopt a strategy of setting a moderate probability threshold to identify reasonably reliable patterns, but labeling a text as factual or feeling only if it contains at least a certain number different patterns for that category, INLINEFORM3 . In order to calibrate the thresholds, we experimented with a range of threshold values on the development (tuning) data and identified INLINEFORM4 =3, INLINEFORM5 =.70, and INLINEFORM6 =3 for the factual class, and INLINEFORM7 =3, INLINEFORM8 =.55, and INLINEFORM9 =3 for the feeling class as having the highest classification precision (with non-trivial recall).",
"The high-precision patterns are then used in the bootstrapping framework to identify more factual and feeling texts from the 11,561 unannotated posts, also from 4forums.com. For each round of bootstrapping, the current set of factual and feeling patterns are matched against the unannotated texts, and posts that match at least 3 patterns associated with a given class are assigned to that class. As shown in Figure FIGREF10 , the Bootstrapped Data Balancer then randomly selects a balanced subset of the newly classified posts to maintain the same proportion of factual vs. feeling documents throughout the bootstrapping process. These new documents are added to the set of labeled documents, and the bootstrapping process repeats. We use the same threshold values to select new high-precision patterns for all iterations."
],
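To make the threshold-based selection and labeling step concrete, here is a minimal sketch under the thresholds given above; the data structures, pattern strings, and helper names are assumptions for illustration and do not come from the AutoSlog-TS implementation.

```python
# Sketch of high-precision pattern selection and bootstrap labeling (assumed data structures).
# Each pattern maps to (corpus frequency, P(class | pattern)) as computed in AutoSlog-TS phase two.

def select_high_precision(pattern_stats, theta_freq, theta_prob):
    """Keep patterns seen at least theta_freq times whose class probability is >= theta_prob."""
    return {p for p, (freq, prob) in pattern_stats.items()
            if freq >= theta_freq and prob >= theta_prob}

def label_post(post_patterns, fact_patterns, feel_patterns, theta_num=3):
    """Assign a class only if the post matches at least theta_num distinct patterns of that class;
    posts matching enough patterns for both classes are left unlabeled, as at test time."""
    n_fact = len(post_patterns & fact_patterns)
    n_feel = len(post_patterns & feel_patterns)
    if n_fact >= theta_num and n_feel < theta_num:
        return "fact"
    if n_feel >= theta_num and n_fact < theta_num:
        return "feel"
    return None   # insufficient or conflicting evidence

# Toy statistics: hypothetical patterns, not ones actually learned by the system.
fact_stats = {"NP Prep rate_of": (7, 0.90), "NP Prep number_of": (4, 0.75),
              "<subj> PassVP was_reported": (5, 0.80), "ActVP <dobj> think": (9, 0.40)}
feel_stats = {"Adj Noun ridiculous_argument": (4, 0.60), "ActVP <dobj> hate": (6, 0.70)}

fact_pats = select_high_precision(fact_stats, theta_freq=3, theta_prob=0.70)
feel_pats = select_high_precision(feel_stats, theta_freq=3, theta_prob=0.55)

post = {"NP Prep rate_of", "NP Prep number_of", "<subj> PassVP was_reported"}
print(label_post(post, fact_pats, feel_pats))   # -> fact
```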
[
"We evaluate the effectiveness of the learned patterns by applying them to the test set of 586 posts (347 fact and 239 feeling posts, maintaining the original ratio of fact to feel data in train). We classify each post as factual or feeling using the same procedure as during bootstrapping: a post is labeled as factual or feeling if it matches at least three high-precision patterns for that category. If a document contains three patterns for both categories, then we leave it unlabeled. We ran the bootstrapping algorithm for four iterations.",
"The upper section of Table TABREF11 shows the Precision and Recall results for the patterns learned during bootstrapping. The Iter 0 row shows the performance of the patterns learned only from the original, annotated training data. The remaining rows show the results for the patterns learned from the unannotated texts during bootstrapping, added cumulatively. We show the results after each iteration of bootstrapping.",
"Table TABREF11 shows that recall increases after each bootstrapping iteration, demonstrating that the patterns learned from the unannotated texts yield substantial gains in coverage over those learned only from the annotated texts. Recall increases from 22.8% to 40.9% for fact, and from 8.0% to 18.8% for feel. The precision for the factual class is reasonably good, but the precision for the feeling class is only moderate. However, although precision typically decreases during boostrapping due to the addition of imperfectly labeled data, the precision drop during bootstrapping is relatively small.",
"We also evaluated the performance of a Naive Bayes (NB) classifier to assess the difficulty of this task with a traditional supervised learning algorithm. We trained a Naive Bayes classifier with unigram features and binary values on the training data, and identified the best Laplace smoothing parameter using the development data. The bottom row of Table TABREF11 shows the results for the NB classifier on the test data. These results show that the NB classifier yields substantially higher recall for both categories, undoubtedly due to the fact that the classifier uses all unigram information available in the text. Our pattern learner, however, was restricted to learning linguistic expressions in specific syntactic constructions, usually requiring more than one word, because our goal was to study specific expressions associated with factual and feeling argument styles. Table TABREF11 shows that the lexico-syntactic patterns did obtain higher precision than the NB classifier, but with lower recall.",
"Table TABREF14 shows the number of patterns learned from the annotated data (Iter 0) and the number of new patterns added after each bootstrapping iteration. The first iteration dramatically increases the set of patterns, and more patterns are steadily added throughout the rest of bootstrapping process.",
"The key take-away from this set of experiments is that distinguishing factual and feeling argumets is clearly a challenging task. There is substantial room for improvement for both precision and recall, and surprisingly, the feeling class seems to be harder to accurately recognize than the factual class. In the next section, we examine the learned patterns and their syntactic forms to better understand the language used in the debate forums."
],
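As referenced above, the Naive Bayes comparison can be approximated with standard sklearn components; the smoothing value shown is just the library default (the paper tunes it on the development set), and the toy training texts are invented stand-ins for the annotated responses.

```python
# Sketch of the binary-unigram Naive Bayes comparison system (assumed re-implementation).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

nb = Pipeline([
    ("unigrams", CountVectorizer(binary=True)),   # binary-valued unigram features
    ("clf", MultinomialNB(alpha=1.0)),            # Laplace smoothing; tuned on the dev set in the paper
])

# Toy stand-ins for annotated responses; the real data are the fact/feel posts described above.
train_texts = ["the statistics clearly show the rate of decline in the population",
               "i just hate this ridiculous argument so much"]
train_labels = ["fact", "feel"]
nb.fit(train_texts, train_labels)
print(nb.predict(["the number of cases was reported last year"]))
```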
[
"Table TABREF13 provides examples of patterns learned for each class that are characteristic of that class. We observe that patterns associated with factual arguments often include topic-specific terminology, explanatory language, and argument phrases. In contrast, the patterns associated with feeling based arguments are often based on the speaker's own beliefs or claims, perhaps assuming that they themselves are credible BIBREF20 , BIBREF24 , or they involve assessment or evaluations of the arguments of the other speaker BIBREF29 . They are typically also very creative and diverse, which may be why it is hard to get higher accuracies for feeling classification, as shown by Table TABREF11 .",
"Figure FIGREF15 shows the distribution of syntactic forms (templates) among all of the high-precision patterns identified for each class during bootstrapping. The x-axes show the syntactic templates and the y-axes show the percentage of all patterns that had a specific syntactic form. Figure FIGREF15 counts each lexico-syntactic pattern only once, regardless of how many times it occurred in the data set. Figure FIGREF15 counts the number of instances of each lexico-syntactic pattern. For example, Figure FIGREF15 shows that the Adj Noun syntactic form produced 1,400 different patterns, which comprise 22.6% of the distinct patterns learned. Figure FIGREF15 captures the fact that there are 7,170 instances of the Adj Noun patterns, which comprise 17.8% of all patterns instances in the data set.",
"For factual arguments, we see that patterns with prepositional phrases (especially NP Prep) and passive voice verb phrases are more common. Instantiations of NP Prep are illustrated by FC1, FC5, FC8, FC10 in Table TABREF13 . Instantiations of PassVP are illustrated by FC2 and FC4 in Table TABREF13 . For feeling arguments, expressions with adjectives and active voice verb phrases are more common. Almost every high probability pattern for feeling includes an adjective, as illustrated by every pattern except FE8 in Table TABREF13 . Figure FIGREF15 shows that three syntactic forms account for a large proportion of the instances of high-precision patterns in the data: Adj Noun, NP Prep, and ActVP.",
"Next, we further examine the NP Prep patterns since they are so prevalent. Figure FIGREF19 shows the percentages of the most frequently occurring prepositions found in the NP Prep patterns learned for each class. Patterns containing the preposition \"of\" make up the vast majority of prepositional phrases for both the fact and feel classes, but is more common in the fact class. In contrast, we observe that patterns with the preposition “for” are substantially more common in the feel class than the fact class.",
"Table TABREF20 shows examples of learned NP Prep patterns with the preposition \"of\" in the fact class and \"for\" in the feel class. The \"of\" preposition in the factual arguments often attaches to objective terminology. The \"for\" preposition in the feeling-based arguments is commonly used to express advocacy (e.g., demand for) or refer to affected population groups (e.g., treatment for). Interestingly, these phrases are subtle indicators of feeling-based arguments rather than explicit expressions of emotion or sentiment."
],
[
"Related research on argumentation has primarily worked with different genres of argument than found in IAC, such as news articles, weblogs, legal briefs, supreme court summaries, and congressional debates BIBREF2 , BIBREF30 , BIBREF31 , BIBREF0 , BIBREF3 , BIBREF4 . The examples from IAC in Figure FIGREF1 illustrate that natural informal dialogues such as those found in online forums exhibit a much broader range of argumentative styles. Other work has on models of natural informal arguments have focused on stance classification BIBREF32 , BIBREF5 , BIBREF6 , argument summarization BIBREF7 , sarcasm detection BIBREF8 , and identifying the structure of arguments such as main claims and their justifications BIBREF9 , BIBREF10 , BIBREF11 .",
"Other types of language data also typically contains a mixture of subjective and objective sentences, e.g. Wiebe et al. wiebeetal2001a,wiebeetalcl04 found that 44% of sentences in a news corpus were subjective. Our work is also related to research on distinguishing subjective and objective text BIBREF33 , BIBREF34 , BIBREF14 , including bootstrapped pattern learning for subjective/objective sentence classification BIBREF15 . However, prior work has primarily focused on news texts, not argumentation, and the notion of objective language is not exactly the same as factual. Our work also aims to recognize emotional language specifically, rather than all forms of subjective language. There has been substantial work on sentiment and opinion analysis (e.g., BIBREF35 , BIBREF36 , BIBREF37 , BIBREF38 , BIBREF39 , BIBREF40 ) and recognition of specific emotions in text BIBREF41 , BIBREF42 , BIBREF43 , BIBREF44 , which could be incorporated in future extensions of our work. We also hope to examine more closely the relationship of this work to previous work aimed at the identification of nasty vs. nice arguments in the IAC BIBREF45 , BIBREF8 ."
],
[
"In this paper, we use observed differences in argumentation styles in online debate forums to extract patterns that are highly correlated with factual and emotional argumentation. From an annotated set of forum post responses, we are able extract high-precision patterns that are associated with the argumentation style classes, and we are then able to use these patterns to get a larger set of indicative patterns using a bootstrapping methodology on a set of unannotated posts.",
"From the learned patterns, we derive some characteristic syntactic forms associated with the fact and feel that we use to discriminate between the classes. We observe distinctions between the way that different arguments are expressed, with respect to the technical and more opinionated terminologies used, which we analyze on the basis of grammatical forms and more direct syntactic patterns, such as the use of different prepositional phrases. Overall, we demonstrate how the learned patterns can be used to more precisely gather similarly-styled argument responses from a pool of unannotated responses, carrying the characteristics of factual and emotional argumentation style.",
"In future work we aim to use these insights about argument structure to produce higher performing classifiers for identifying factual vs. feeling argument styles. We also hope to understand in more detail the relationship between these argument styles and the heurstic routes to persuasion and associated strategies that have been identified in previous work on argumentation and persuasion BIBREF2 , BIBREF0 , BIBREF4 ."
],
[
"This work was funded by NSF Grant IIS-1302668-002 under the Robust Intelligence Program. The collection and annotation of the IAC corpus was supported by an award from NPS-BAA-03 to UCSC and an IARPA Grant under the Social Constructs in Language Program to UCSC by subcontract from the University of Maryland."
]
],
"section_name": [
"Introduction",
"Pattern Learning for Factual and Emotional Arguments",
"Data",
"Bootstrapped Pattern Learning",
"Evaluation",
"Analysis",
"Related Work",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"9332f9067a998929f43dc1c31b0718a6f0a6e366",
"f6d82f656548c35901b5ba24e1465e5b97451de4"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"The IAC includes 10,003 Quote-Response (Q-R) pairs with annotations for factual vs. feeling argument style, across a range of topics. Figure FIGREF4 shows the wording of the survey question used to collect the annotations. Fact vs. Feeling was measured as a scalar ranging from -5 to +5, because previous work suggested that taking the means of scalar annotations reduces noise in Mechanical Turk annotations BIBREF26 . Each of the pairs was annotated by 5-7 annotators. For our experiments, we use only the response texts and assign a binary Fact or Feel label to each response: texts with score INLINEFORM0 1 are assigned to the fact class and texts with score INLINEFORM1 -1 are assigned to the feeling class. We did not use the responses with scores between -1 and 1 because they had a very weak Fact/Feeling assessment, which could be attributed to responses either containing aspects of both factual and feeling expression, or neither. The resulting set contains 3,466 fact and 2,382 feeling posts. We randomly partitioned the fact/feel responses into three subsets: a training set with 70% of the data (2,426 fact and 1,667 feeling posts), a development (tuning) set with 20% of the data (693 fact and 476 feeling posts), and a test set with 10% of the data (347 fact and 239 feeling posts). For the bootstrapping method, we also used 11,560 responses from the unannotated data.",
"FLOAT SELECTED: Figure 1: Examples of FACTUAL and FEELING based debate forum Quotes and Responses. Only the responses were labeled for FACT vs. FEEL."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The IAC includes 10,003 Quote-Response (Q-R) pairs with annotations for factual vs. feeling argument style, across a range of topics. Figure FIGREF4 shows the wording of the survey question used to collect the annotations.",
"FLOAT SELECTED: Figure 1: Examples of FACTUAL and FEELING based debate forum Quotes and Responses. Only the responses were labeled for FACT vs. FEEL."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"9942978f5df1eab0b7990b4f30a309e818893fe4",
"ea48e8b8b6aa2268c98877de7ce9dcc2c4b9a20d"
],
"answer": [
{
"evidence": [
"Next, we further examine the NP Prep patterns since they are so prevalent. Figure FIGREF19 shows the percentages of the most frequently occurring prepositions found in the NP Prep patterns learned for each class. Patterns containing the preposition \"of\" make up the vast majority of prepositional phrases for both the fact and feel classes, but is more common in the fact class. In contrast, we observe that patterns with the preposition “for” are substantially more common in the feel class than the fact class."
],
"extractive_spans": [],
"free_form_answer": "Patterns containing the preposition \"of\" make up the vast majority of prepositional phrases for both the fact and feel classes and patterns with the preposition “for” are substantially more common in the feel class than the fact class.",
"highlighted_evidence": [
"Patterns containing the preposition \"of\" make up the vast majority of prepositional phrases for both the fact and feel classes, but is more common in the fact class. In contrast, we observe that patterns with the preposition “for” are substantially more common in the feel class than the fact class."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"From the learned patterns, we derive some characteristic syntactic forms associated with the fact and feel that we use to discriminate between the classes. We observe distinctions between the way that different arguments are expressed, with respect to the technical and more opinionated terminologies used, which we analyze on the basis of grammatical forms and more direct syntactic patterns, such as the use of different prepositional phrases. Overall, we demonstrate how the learned patterns can be used to more precisely gather similarly-styled argument responses from a pool of unannotated responses, carrying the characteristics of factual and emotional argumentation style."
],
"extractive_spans": [
"forms associated with the fact and feel"
],
"free_form_answer": "",
"highlighted_evidence": [
"From the learned patterns, we derive some characteristic syntactic forms associated with the fact and feel that we use to discriminate between the classes."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"966088d71d591acdfcc677acfab0b7e9d98af2a1",
"efae1483147dcef3f470ec02f0e0ce7b646d79e9"
],
"answer": [
{
"evidence": [
"Since the IAC data set contains a large number of unannotated debate forum posts, we embedd AutoSlog-TS in a bootstrapping framework to learn additional patterns. The flow diagram for the bootstrapping system is shown in Figure FIGREF10 .",
"FLOAT SELECTED: Figure 4: Flow Diagram for Bootstrapping Process"
],
"extractive_spans": [
"flow diagram for the bootstrapping system is shown in Figure FIGREF10"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since the IAC data set contains a large number of unannotated debate forum posts, we embedd AutoSlog-TS in a bootstrapping framework to learn additional patterns. The flow diagram for the bootstrapping system is shown in Figure FIGREF10 .",
"FLOAT SELECTED: Figure 4: Flow Diagram for Bootstrapping Process"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To accomplish this, we use the AutoSlog-TS system BIBREF27 to extract linguistic expressions from the annotated texts. Since the IAC also contains a large collection of unannotated texts, we then embed AutoSlog-TS in a bootstrapping framework to learn additional linguistic expressions from the unannotated texts. First, we briefly describe the AutoSlog-TS pattern learner and the set of pattern templates that we used. Then, we present the bootstrapping process to learn more Fact/Feeling patterns from unannotated texts.",
"Initially, we give the labeled training data to AutoSlog-TS, which generates patterns and associated statistics. The next step identifies high-precision patterns that can be used to label some of the unannotated texts as factual or feeling. We define two thresholds: INLINEFORM0 to represent a minimum frequency value, and INLINEFORM1 to represent a minimum probability value. We found that using only a small set of patterns (when INLINEFORM2 is set to a high value) achieves extremely high precision, yet results in a very low recall. Instead, we adopt a strategy of setting a moderate probability threshold to identify reasonably reliable patterns, but labeling a text as factual or feeling only if it contains at least a certain number different patterns for that category, INLINEFORM3 . In order to calibrate the thresholds, we experimented with a range of threshold values on the development (tuning) data and identified INLINEFORM4 =3, INLINEFORM5 =.70, and INLINEFORM6 =3 for the factual class, and INLINEFORM7 =3, INLINEFORM8 =.55, and INLINEFORM9 =3 for the feeling class as having the highest classification precision (with non-trivial recall)."
],
"extractive_spans": [],
"free_form_answer": "They embed AutoSlog-TS in a bootstrapping framework to learn additional linguistic expressions from the unannotated texts - they give the labeled training data to AutoSlog-TS, which generates patterns and associated statistics and then identifies high-precision patterns that can be used to label some of the unannotated texts as factual or feeling.",
"highlighted_evidence": [
" Since the IAC also contains a large collection of unannotated texts, we then embed AutoSlog-TS in a bootstrapping framework to learn additional linguistic expressions from the unannotated texts.",
"Initially, we give the labeled training data to AutoSlog-TS, which generates patterns and associated statistics. The next step identifies high-precision patterns that can be used to label some of the unannotated texts as factual or feeling. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"53e1cb7f3633935ed774c433fec38fb013b415af",
"c7c58e638fad89a0fa915d3ab22e982a7fd7780d"
],
"answer": [
{
"evidence": [
"Table TABREF20 shows examples of learned NP Prep patterns with the preposition \"of\" in the fact class and \"for\" in the feel class. The \"of\" preposition in the factual arguments often attaches to objective terminology. The \"for\" preposition in the feeling-based arguments is commonly used to express advocacy (e.g., demand for) or refer to affected population groups (e.g., treatment for). Interestingly, these phrases are subtle indicators of feeling-based arguments rather than explicit expressions of emotion or sentiment.",
"FLOAT SELECTED: Table 4: High-Probability FACT Phrases with “OF” and FEEL Phrases with “FOR”"
],
"extractive_spans": [],
"free_form_answer": "Examples of extracted patters with high probability that contain of are: MARRIAGE FOR, STANDING FOR, SAME FOR, TREATMENT FOR, DEMAND FOR, ATTENTION FOR, ADVOCATE FOR, NO EVIDENCE FOR, JUSTIFICATION FOR, EXCUSE FOR",
"highlighted_evidence": [
"Table TABREF20 shows examples of learned NP Prep patterns with the preposition \"of\" in the fact class and \"for\" in the feel class",
"FLOAT SELECTED: Table 4: High-Probability FACT Phrases with “OF” and FEEL Phrases with “FOR”"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF13 provides examples of patterns learned for each class that are characteristic of that class. We observe that patterns associated with factual arguments often include topic-specific terminology, explanatory language, and argument phrases. In contrast, the patterns associated with feeling based arguments are often based on the speaker's own beliefs or claims, perhaps assuming that they themselves are credible BIBREF20 , BIBREF24 , or they involve assessment or evaluations of the arguments of the other speaker BIBREF29 . They are typically also very creative and diverse, which may be why it is hard to get higher accuracies for feeling classification, as shown by Table TABREF11 ."
],
"extractive_spans": [],
"free_form_answer": "Pattrn based on the speaker's own beliefs or claims, perhaps assuming that they themselves are credible or they involve assessment or evaluations of the arguments of the other speaker. They are typically also very creative and diverse.",
"highlighted_evidence": [
" In contrast, the patterns associated with feeling based arguments are often based on the speaker's own beliefs or claims, perhaps assuming that they themselves are credible BIBREF20 , BIBREF24 , or they involve assessment or evaluations of the arguments of the other speaker BIBREF29 . They are typically also very creative and diverse, which may be why it is hard to get higher accuracies for feeling classification, as shown by Table TABREF11 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"3f6eac37ade15390fb367556d2b37587c070f71d",
"ec93ae312db7bc39264f6a6df374be9471ea2cf0"
],
"answer": [
{
"evidence": [
"Table TABREF13 provides examples of patterns learned for each class that are characteristic of that class. We observe that patterns associated with factual arguments often include topic-specific terminology, explanatory language, and argument phrases. In contrast, the patterns associated with feeling based arguments are often based on the speaker's own beliefs or claims, perhaps assuming that they themselves are credible BIBREF20 , BIBREF24 , or they involve assessment or evaluations of the arguments of the other speaker BIBREF29 . They are typically also very creative and diverse, which may be why it is hard to get higher accuracies for feeling classification, as shown by Table TABREF11 ."
],
"extractive_spans": [
" patterns associated with factual arguments often include topic-specific terminology, explanatory language, and argument phrases"
],
"free_form_answer": "",
"highlighted_evidence": [
" We observe that patterns associated with factual arguments often include topic-specific terminology, explanatory language, and argument phrases."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 4: High-Probability FACT Phrases with “OF” and FEEL Phrases with “FOR”",
"Table TABREF20 shows examples of learned NP Prep patterns with the preposition \"of\" in the fact class and \"for\" in the feel class. The \"of\" preposition in the factual arguments often attaches to objective terminology. The \"for\" preposition in the feeling-based arguments is commonly used to express advocacy (e.g., demand for) or refer to affected population groups (e.g., treatment for). Interestingly, these phrases are subtle indicators of feeling-based arguments rather than explicit expressions of emotion or sentiment."
],
"extractive_spans": [],
"free_form_answer": "Examples of extracted patters with high probability that correlate with factual argument are: RESULT OF, ORIGIN OF, THEORY OF, EVIDENCE OF, PARTS OF, EVOLUTION OF, PERCENT OF, THOUSANDS OF, EXAMPLE OF, LAW OF",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: High-Probability FACT Phrases with “OF” and FEEL Phrases with “FOR”",
"Table TABREF20 shows examples of learned NP Prep patterns with the preposition \"of\" in the fact class and \"for\" in the feel class."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"09bf11867be4db5644a8e3742bdc6c3f070b9008",
"cfa039ce379f5b71a1afbaf46336cb3bd7e7d1f7"
],
"answer": [
{
"evidence": [
"The IAC includes 10,003 Quote-Response (Q-R) pairs with annotations for factual vs. feeling argument style, across a range of topics. Figure FIGREF4 shows the wording of the survey question used to collect the annotations. Fact vs. Feeling was measured as a scalar ranging from -5 to +5, because previous work suggested that taking the means of scalar annotations reduces noise in Mechanical Turk annotations BIBREF26 . Each of the pairs was annotated by 5-7 annotators. For our experiments, we use only the response texts and assign a binary Fact or Feel label to each response: texts with score INLINEFORM0 1 are assigned to the fact class and texts with score INLINEFORM1 -1 are assigned to the feeling class. We did not use the responses with scores between -1 and 1 because they had a very weak Fact/Feeling assessment, which could be attributed to responses either containing aspects of both factual and feeling expression, or neither. The resulting set contains 3,466 fact and 2,382 feeling posts. We randomly partitioned the fact/feel responses into three subsets: a training set with 70% of the data (2,426 fact and 1,667 feeling posts), a development (tuning) set with 20% of the data (693 fact and 476 feeling posts), and a test set with 10% of the data (347 fact and 239 feeling posts). For the bootstrapping method, we also used 11,560 responses from the unannotated data."
],
"extractive_spans": [
"binary Fact or Feel label to each response: texts with score INLINEFORM0 1 are assigned to the fact class and texts with score INLINEFORM1 -1 are assigned to the feeling class."
],
"free_form_answer": "",
"highlighted_evidence": [
"For our experiments, we use only the response texts and assign a binary Fact or Feel label to each response: texts with score INLINEFORM0 1 are assigned to the fact class and texts with score INLINEFORM1 -1 are assigned to the feeling class. We did not use the responses with scores between -1 and 1 because they had a very weak Fact/Feeling assessment, which could be attributed to responses either containing aspects of both factual and feeling expression, or neither."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our research examines factual versus feeling argument styles, drawing on annotations provided in the Internet Argument Corpus (IAC) BIBREF6 . This corpus includes quote-response pairs that were manually annotated with respect to whether the response is primarily a factual or feeling based argument, as Section SECREF2 describes in more detail. Figure FIGREF1 provides examples of responses in the IAC (paired with preceding quotes to provide context), along with the response's factual vs. feeling label."
],
"extractive_spans": [
"manually"
],
"free_form_answer": "",
"highlighted_evidence": [
"This corpus includes quote-response pairs that were manually annotated with respect to whether the response is primarily a factual or feeling based argument, as Section SECREF2 describes in more detail."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"What are the most discriminating patterns which are analyzed?",
"What bootstrapping methodology was used to find new patterns?",
"What patterns were extracted which were correlated with emotional arguments?",
"What patterns were extracted which were correlated with factual arguments?",
"How were the factual and feeling forum posts annotated?"
],
"question_id": [
"74cc0300e22f60232812019011a09df92bbec803",
"865811dcf63a1dd3f22c62ec39ffbca4b182de31",
"9e378361b6462034aaf752adf04595ef56370b86",
"667dce60255d8ab959869eaf8671312df8c0004b",
"d5e716c1386b6485e63075e980f80d44564d0aa2",
"1fd31fdfff93d65f36e93f6919f6976f5f172197"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Examples of FACTUAL and FEELING based debate forum Quotes and Responses. Only the responses were labeled for FACT vs. FEEL.",
"Figure 3: The Pattern Templates of AutoSlog-TS with Example Instantiations",
"Figure 4: Flow Diagram for Bootstrapping Process",
"Table 1: Evaluation Results",
"Table 2: Examples of Characteristic Argumentation Style Patterns for Each Class",
"Table 3: Number of New Patterns Added after Each Round of Bootstrapping",
"Figure 5: Histograms of Syntactic Forms by Percentage of Total",
"Figure 6: Percentage of Preposition Types in the NP Prep Patterns",
"Table 4: High-Probability FACT Phrases with “OF” and FEEL Phrases with “FOR”"
],
"file": [
"2-Figure1-1.png",
"4-Figure3-1.png",
"4-Figure4-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Figure5-1.png",
"7-Figure6-1.png",
"8-Table4-1.png"
]
} | [
"What are the most discriminating patterns which are analyzed?",
"What bootstrapping methodology was used to find new patterns?",
"What patterns were extracted which were correlated with emotional arguments?",
"What patterns were extracted which were correlated with factual arguments?"
] | [
[
"1709.05295-Analysis-3",
"1709.05295-Conclusion-1"
],
[
"1709.05295-Bootstrapped Pattern Learning-1",
"1709.05295-Bootstrapped Pattern Learning-7",
"1709.05295-4-Figure4-1.png",
"1709.05295-Bootstrapped Pattern Learning-6"
],
[
"1709.05295-Analysis-0",
"1709.05295-Analysis-4",
"1709.05295-8-Table4-1.png"
],
[
"1709.05295-Analysis-4",
"1709.05295-Analysis-0",
"1709.05295-8-Table4-1.png"
]
] | [
"Patterns containing the preposition \"of\" make up the vast majority of prepositional phrases for both the fact and feel classes and patterns with the preposition “for” are substantially more common in the feel class than the fact class.",
"They embed AutoSlog-TS in a bootstrapping framework to learn additional linguistic expressions from the unannotated texts - they give the labeled training data to AutoSlog-TS, which generates patterns and associated statistics and then identifies high-precision patterns that can be used to label some of the unannotated texts as factual or feeling.",
"Pattrn based on the speaker's own beliefs or claims, perhaps assuming that they themselves are credible or they involve assessment or evaluations of the arguments of the other speaker. They are typically also very creative and diverse.",
"Examples of extracted patters with high probability that correlate with factual argument are: RESULT OF, ORIGIN OF, THEORY OF, EVIDENCE OF, PARTS OF, EVOLUTION OF, PERCENT OF, THOUSANDS OF, EXAMPLE OF, LAW OF"
] | 280 |
1906.05685 | A Focus on Neural Machine Translation for African Languages | African languages are numerous, complex and low-resourced. The datasets required for machine translation are difficult to discover, and existing research is hard to reproduce. Minimal attention has been given to machine translation for African languages so there is scant research regarding the problems that arise when using machine translation techniques. To begin addressing these problems, we trained models to translate English to five of the official South African languages (Afrikaans, isiZulu, Northern Sotho, Setswana, Xitsonga), making use of modern neural machine translation techniques. The results obtained show the promise of using neural machine translation techniques for African languages. By providing reproducible publicly-available data, code and results, this research aims to provide a starting point for other researchers in African machine translation to compare to and build upon. | {
"paragraphs": [
[
"Africa has over 2000 languages across the continent BIBREF0 . South Africa itself has 11 official languages. Unlike many major Western languages, the multitude of African languages are very low-resourced and the few resources that exist are often scattered and difficult to obtain.",
"Machine translation of African languages would not only enable the preservation of such languages, but also empower African citizens to contribute to and learn from global scientific, social and educational conversations, which are currently predominantly English-based BIBREF1 . Tools, such as Google Translate BIBREF2 , support a subset of the official South African languages, namely English, Afrikaans, isiZulu, isiXhosa and Southern Sotho, but do not translate the remaining six official languages.",
"Unfortunately, in addition to being low-resourced, progress in machine translation of African languages has suffered a number of problems. This paper discusses the problems and reviews existing machine translation research for African languages which demonstrate those problems. To try to solve the highlighted problems, we train models to perform machine translation of English to Afrikaans, isiZulu, Northern Sotho (N. Sotho), Setswana and Xitsonga, using state-of-the-art neural machine translation (NMT) architectures, namely, the Convolutional Sequence-to-Sequence (ConvS2S) and Transformer architectures.",
"Section SECREF2 describes the problems facing machine translation for African languages, while the target languages are described in Section SECREF3 . Related work is presented in Section SECREF4 , and the methodology for training machine translation models is discussed in Section SECREF5 . Section SECREF6 presents quantitative and qualitative results."
],
[
"The difficulties hindering the progress of machine translation of African languages are discussed below.",
"Low availability of resources for African languages hinders the ability for researchers to do machine translation. Institutes such as the South African Centre for Digital Language Resources (SADiLaR) are attempting to change that by providing an open platform for technologies and resources for South African languages BIBREF7 . This, however, only addresses the 11 official languages of South Africa and not the greater problems within Africa.",
"Discoverability: The resources for African languages that do exist are hard to find. Often one needs to be associated with a specific academic institution in a specific country to gain access to the language data available for that country. This reduces the ability of countries and institutions to combine their knowledge and datasets to achieve better performance and innovations. Often the existing research itself is hard to discover since they are often published in smaller African conferences or journals, which are not electronically available nor indexed by research tools such as Google Scholar.",
"Reproducibility: The data and code of existing research are rarely shared, which means researchers cannot reproduce the results properly. Examples of papers that do not publicly provide their data and code are described in Section SECREF4 .",
"Focus: According to BIBREF8 , African society does not see hope for indigenous languages to be accepted as a more primary mode for communication. As a result, there are few efforts to fund and focus on translation of these languages, despite their potential impact.",
"Lack of benchmarks: Due to the low discoverability and the lack of research in the field, there are no publicly available benchmarks or leader boards to new compare machine translation techniques to.",
"This paper aims to address some of the above problems as follows: We trained models to translate English to Afrikaans, isiZulu, N. Sotho, Setswana and Xitsonga, using modern NMT techniques. We have published the code, datasets and results for the above experiments on GitHub, and in doing so promote reproducibility, ensure discoverability and create a baseline leader board for the five languages, to begin to address the lack of benchmarks."
],
[
"We provide a brief description of the Southern African languages addressed in this paper, since many readers may not be familiar with them. The isiZulu, N. Sotho, Setswana, and Xitsonga languages belong to the Southern Bantu group of African languages BIBREF9 . The Bantu languages are agglutinative and all exhibit a rich noun class system, subject-verb-object word order, and tone BIBREF10 . N. Sotho and Setswana are closely related and are highly mutually-intelligible. Xitsonga is a language of the Vatsonga people, originating in Mozambique BIBREF11 . The language of isiZulu is the second most spoken language in Southern Africa, belongs to the Nguni language family, and is known for its morphological complexity BIBREF12 , BIBREF13 . Afrikaans is an analytic West-Germanic language, that descended from Dutch settlers BIBREF14 ."
],
[
"This section details published research for machine translation for the South African languages. The existing research is technically incomparable to results published in this paper, because their datasets (in particular their test sets) are not published. Table TABREF1 shows the BLEU scores provided by the existing work.",
"Google Translate BIBREF2 , as of February 2019, provides translations for English, Afrikaans, isiZulu, isiXhosa and Southern Sotho, six of the official South African languages. Google Translate was tested with the Afrikaans and isiZulu test sets used in this paper to determine its performance. However, due to the uncertainty regarding how Google Translate was trained, and which data it was trained on, there is a possibility that the system was trained on the test set used in this study as this test set was created from publicly available governmental data. For this reason, we determined this system is not comparable to this paper's models for isiZulu and Afrikaans.",
" BIBREF3 trained Transformer models for English to Setswana on the parallel Autshumato dataset BIBREF15 . Data was not cleaned nor was any additional data used. This is the only study reviewed that released datasets and code. BIBREF4 performed statistical phrase-based translation for English to Setswana translation. This research used linguistically-motivated pre- and post-processing of the corpus in order to improve the translations. The system was trained on the Autshumato dataset and also used an additional monolingual dataset.",
" BIBREF5 used statistical machine translation for English to Xitsonga translation. The models were trained on the Autshumato data, as well as a large monolingual corpus. A factored machine translation system was used, making use of a combination of lemmas and part of speech tags.",
" BIBREF6 used unsupervised word segmentation with phrase-based statistical machine translation models. These models translate from English to Afrikaans, N. Sotho, Xitsonga and isiZulu. The parallel corpora were created by crawling online sources and official government data and aligning these sentences using the HunAlign software package. Large monolingual datasets were also used.",
" BIBREF16 performed word translation for English to isiZulu. The translation system was trained on a combination of Autshumato, Bible, and data obtained from the South African Constitution. All of the isiZulu text was syllabified prior to the training of the word translation system.",
"It is evident that there is exceptionally little research available using machine translation techniques for Southern African languages. Only one of the mentioned studies provide code and datasets for their results. As a result, the BLEU scores obtained in this paper are technically incomparable to those obtained in past papers."
],
[
"The following section describes the methodology used to train the machine translation models for each language. Section SECREF4 describes the datasets used for training and their preparation, while the algorithms used are described in Section SECREF8 ."
],
[
"The publicly-available Autshumato parallel corpora are aligned corpora of South African governmental data which were created for use in machine translation systems BIBREF15 . The datasets are available for download at the South African Centre for Digital Language Resources website. The datasets were created as part of the Autshumato project which aims to provide access to data to aid in the development of open-source translation systems in South Africa.",
"The Autshumato project provides parallel corpora for English to Afrikaans, isiZulu, N. Sotho, Setswana, and Xitsonga. These parallel corpora were aligned on the sentence level through a combination of automatic and manual alignment techniques.",
"The official Autshumato datasets contain many duplicates, therefore to avoid data leakage between training, development and test sets, all duplicate sentences were removed. These clean datasets were then split into 70% for training, 30% for validation, and 3000 parallel sentences set aside for testing. Summary statistics for each dataset are shown in Table TABREF2 , highlighting how small each dataset is.",
"Even though the datasets were cleaned for duplicate sentences, further issues exist within the datasets which negatively affects models trained with this data. In particular, the isiZulu dataset is of low quality. Examples of issues found in the isiZulu dataset are explained in Table TABREF3 . The source and target sentences are provided from the dataset, the back translation from the target to the source sentence is given, and the issue pertaining to the translation is explained."
],
[
"We trained translation models for two established NMT architectures for each language, namely, ConvS2S and Transformer. As the purpose of this work is to provide a baseline benchmark, we have not performed significant hyperparameter optimization, and have left that as future work.",
"The Fairseq(-py) toolkit was used to model the ConvS2S model BIBREF17 . Fairseq's named architecture “fconv” was used, with the default hyperparameters recommended by Fairseq documentation as follows: The learning rate was set to 0.25, a dropout of 0.2, and the maximum tokens for each mini-batch was set to 4000. The dataset was preprocessed using Fairseq's preprocess script to build the vocabularies and to binarize the dataset. To decode the test data, beam search was used, with a beam width of 5. For each language, a model was trained using traditional white-space tokenisation, as well as byte-pair encoding tokenisation (BPE). To appropriately select the number of tokens for BPE, for each target language, we performed an ablation study (described in Section SECREF25 ).",
"The Tensor2Tensor implementation of Transformer was used BIBREF18 . The models were trained on a Google TPU, using Tensor2Tensor's recommended parameters for training, namely, a batch size of 2048, an Adafactor optimizer with learning rate warm-up of 10K steps, and a max sequence length of 64. The model was trained for 125K steps. Each dataset was encoded using the Tensor2Tensor data generation algorithm which invertibly encodes a native string as a sequence of subtokens, using WordPiece, an algorithm similar to BPE BIBREF19 . Beam search was used to decode the test data, with a beam width of 4."
],
[
"Section SECREF9 describes the quantitative performance of the models by comparing BLEU scores, while a qualitative analysis is performed in Section SECREF10 by analysing translated sentences as well as attention maps. Section SECREF25 provides the results for an ablation study done regarding the effects of BPE."
],
[
"The BLEU scores for each target language for both the ConvS2S and the Transformer models are reported in Table TABREF7 . For the ConvS2S model, we provide results for sentences tokenised by white spaces (Word), and when tokenised using the optimal number of BPE tokens (Best BPE), as determined in Section SECREF25 . The Transformer model uses the same number of WordPiece tokens as the number of BPE tokens which was deemed optimal during the BPE ablation study done on the ConvS2S model.",
"In general, the Transformer model outperformed the ConvS2S model for all of the languages, sometimes achieving 10 BLEU points or more over the ConvS2S models. The results also show that the translations using BPE tokenisation outperformed translations using standard word-based tokenisation. The relative performance of Transformer to ConvS2S models agrees with what has been seen in existing NMT literature BIBREF20 . This is also the case when using BPE tokenisation as compared to standard word-based tokenisation techniques BIBREF21 .",
"Overall, we notice that the performance of the NMT techniques on a specific target language is related to both the number of parallel sentences and the morphological typology of the language. In particular, isiZulu, N. Sotho, Setswana, and Xitsonga languages are all agglutinative languages, making them harder to translate, especially with very little data BIBREF22 . Afrikaans is not agglutinative, thus despite having less than half the number of parallel sentences as Xitsonga and Setswana, the Transformer model still achieves reasonable performance. Xitsonga and Setswana are both agglutinative, but have significantly more data, so their models achieve much higher performance than N. Sotho or isiZulu.",
"The translation models for isiZulu achieved the worst performance when compared to the others, with the maximum BLEU score of 3.33. We attribute the bad performance to the morphological complexity of the language (as discussed in Section SECREF3 ), the very small size of the dataset as well as the poor quality of the data (as discussed in Section SECREF4 )."
],
[
"We examine randomly sampled sentences from the test set for each language and translate them using the trained models. In order for readers to understand the accuracy of the translations, we provide back-translations of the generated translation to English. These back-translations were performed by a speaker of the specific target language. More examples of the translations are provided in the Appendix. Additionally, attention visualizations are provided for particular translations. The attention visualizations showed how the Transformer multi-head attention captured certain syntactic rules of the target languages.",
"In Table TABREF20 , ConvS2S did not perform the translation successfully. Despite the content being related to the topic of the original sentence, the semantics did not carry. On the other hand, Transformer achieved an accurate translation. Interestingly, the target sentence used an abbreviation, however, both translations did not. This is an example of how lazy target translations in the original dataset would negatively affect the BLEU score, and implore further improvement to the datasets. We plot an attention map to demonstrate the success of Transformer to learn the English-to-Afrikaans sentence structure in Figure FIGREF12 .",
"Despite the bad performance of the English-to-isiZulu models, we wanted to understand how they were performing. The translated sentences, given in Table TABREF21 , do not make sense, but all of the words are valid isiZulu words. Interestingly, the ConvS2S translation uses English words in the translation, perhaps due to English data occurring in the isiZulu dataset. The ConvS2S however correctly prefixed the English phrase with the correct prefix “i-\". The Transformer translation includes invalid acronyms and mentions “disease\" which is not in the source sentence.",
"If we examine Table TABREF22 , the ConvS2S model struggled to translate the sentence and had many repeating phrases. Given that the sentence provided is a difficult one to translate, this is not surprising. The Transformer model translated the sentence well, except included the word “boithabišo”, which in this context can be translated to “fun” - a concept that was not present in the original sentence.",
"Table TABREF23 shows that the ConvS2S model translated the sentence very successfully. The word “khumo” directly means “wealth” or “riches”. A better synonym would be “letseno”, meaning income or “letlotlo” which means monetary assets. The Transformer model only had a single misused word (translated “shortage” into “necessity”), but otherwise translated successfully. The attention map visualization in Figure FIGREF18 suggests that the attention mechanism has learnt that the sentence structure of Setswana is the same as English.",
"An examination of Table TABREF24 shows that both models perform well translating the given sentence. However, the ConvS2S model had a slight semantic failure where the cause of the economic growth was attributed to unemployment, rather than vice versa."
],
[
"BPE BIBREF21 and its variants, such as SentencePiece BIBREF19 , aid translation of rare words in NMT systems. However, the choice of the number of tokens to generate for any particular language is not made obvious by literature. Popular choices for the number of tokens are between 30,000 and 40,000: BIBREF20 use 37,000 for WMT 2014 English-to-German translation task and 32,000 tokens for the WMT 2014 English-to-French translation task. BIBREF23 used 32,000 SentencePiece tokens across all source and target data. Unfortunately, no motivation for the choice for the number of tokens used when creating sub-words has been provided.",
"Initial experimentation suggested that the choice of the number of tokens used when running BPE tokenisation, affected the model's final performance significantly. In order to obtain the best results for the given datasets and models, we performed an ablation study, using subword-nmt BIBREF21 , over the number of tokens required by BPE, for each language, on the ConvS2S model. The results of the ablation study are shown in Figure FIGREF26 .",
"As can be seen in Figure FIGREF26 , the models for languages with the smallest datasets (namely isiZulu and N. Sotho) achieve higher BLEU scores when the number of BPE tokens is smaller, and decrease as the number of BPE tokens increases. In contrast, the performance of the models for languages with larger datasets (namely Setswana, Xitsonga, and Afrikaans) improves as the number of BPE tokens increases. There is a decrease in performance at 20 000 BPE tokens for Setswana and Afrikaans, which the authors cannot yet explain and require further investigation. The optimal number of BPE tokens were used for each language, as indicated in Table TABREF7 ."
],
[
"Future work involves improving the current datasets, specifically the isiZulu dataset, and thus improving the performance of the current machine translation models.",
"As this paper only provides translation models for English to five of the South African languages and Google Translate provides translation for an additional two languages, further work needs to be done to provide translation for all 11 official languages. This would require performing data collection and incorporating unsupervised BIBREF24 , BIBREF25 , meta-learning BIBREF26 , or zero-shot techniques BIBREF23 ."
],
[
"African languages are numerous and low-resourced. Existing datasets and research for machine translation are difficult to discover, and the research hard to reproduce. Additionally, very little attention has been given to the African languages so no benchmarks or leader boards exist, and few attempts at using popular NMT techniques exist for translating African languages.",
"This paper reviewed existing research in machine translation for South African languages and highlighted their problems of discoverability and reproducibility. In order to begin addressing these problems, we trained models to translate English to five South African languages, using modern NMT techniques, namely ConvS2S and Transformer. The results were promising for the languages that have more higher quality data (Xitsonga, Setswana, Afrikaans), while there is still extensive work to be done for isiZulu and N. Sotho which have exceptionally little data and the data is of worse quality. Additionally, an ablation study over the number of BPE tokens was performed for each language. Given that all data and code for the experiments are published on GitHub, these benchmarks provide a starting point for other researchers to find, compare and build upon.",
"The source code and the data used are available at https://github.com/LauraMartinus/ukuxhumana."
],
[
"The authors would like to thank Reinhard Cromhout, Guy Bosa, Mbongiseni Ncube, Seale Rapolai, and Vongani Maluleke for assisting us with the back-translations, and Jason Webster for Google Translate API assistance. Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC)."
],
[
"Additional translation results from ConvS2S and Transformer are given in Table TABREF27 along with their back-translations for Afrikaans, N. Sotho, Setswana, and Xitsonga. We include these additional sentences as we feel that the single sentence provided per language in Section SECREF10 , is not enough demonstrate the capabilities of the models. Given the scarcity of research in this field, researchers might find the additional sentences insightful into understanding the real-world capabilities and potential, even if BLEU scores are low."
]
],
"section_name": [
"Introduction",
"Problems",
"Languages",
"Related Work",
"Methodology",
"Data",
"Algorithms",
"Results",
"Quantitative Results",
"Qualitative Results",
"Ablation Study over the Number of Tokens for Byte-pair Encoding",
"Future Work",
"Conclusion",
"Acknowledgements",
"Appendix"
]
} | {
"answers": [
{
"annotation_id": [
"3db055f12b05ae9feb4f2a548b0d0ef495525cd8",
"685a8ca9a0f44784bae2b673f1c26e944ba9ae65"
],
"answer": [
{
"evidence": [
"Section SECREF9 describes the quantitative performance of the models by comparing BLEU scores, while a qualitative analysis is performed in Section SECREF10 by analysing translated sentences as well as attention maps. Section SECREF25 provides the results for an ablation study done regarding the effects of BPE."
],
"extractive_spans": [
"BLEU"
],
"free_form_answer": "",
"highlighted_evidence": [
"Section SECREF9 describes the quantitative performance of the models by comparing BLEU scores, while a qualitative analysis is performed in Section SECREF10 by analysing translated sentences as well as attention maps."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Section SECREF9 describes the quantitative performance of the models by comparing BLEU scores, while a qualitative analysis is performed in Section SECREF10 by analysing translated sentences as well as attention maps. Section SECREF25 provides the results for an ablation study done regarding the effects of BPE."
],
"extractive_spans": [
"BLEU"
],
"free_form_answer": "",
"highlighted_evidence": [
"Section SECREF9 describes the quantitative performance of the models by comparing BLEU scores, while a qualitative analysis is performed in Section SECREF10 by analysing translated sentences as well as attention maps."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"496c366d1071a8bde077789ee718552218438a30",
"778dc5aab6c5d8692d79720c51993f4e4f78c8b1"
],
"answer": [
{
"evidence": [
"We trained translation models for two established NMT architectures for each language, namely, ConvS2S and Transformer. As the purpose of this work is to provide a baseline benchmark, we have not performed significant hyperparameter optimization, and have left that as future work."
],
"extractive_spans": [
"ConvS2S",
"Transformer"
],
"free_form_answer": "",
"highlighted_evidence": [
"We trained translation models for two established NMT architectures for each language, namely, ConvS2S and Transformer."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We trained translation models for two established NMT architectures for each language, namely, ConvS2S and Transformer. As the purpose of this work is to provide a baseline benchmark, we have not performed significant hyperparameter optimization, and have left that as future work."
],
"extractive_spans": [
"ConvS2S",
"Transformer"
],
"free_form_answer": "",
"highlighted_evidence": [
"We trained translation models for two established NMT architectures for each language, namely, ConvS2S and Transformer."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"75d7f29dca4fecf6c8130b9d7962b7cfd4ea6fe0",
"8659f1f960447f827fa1ef8d5c0de446b976fcec"
],
"answer": [
{
"evidence": [
"In general, the Transformer model outperformed the ConvS2S model for all of the languages, sometimes achieving 10 BLEU points or more over the ConvS2S models. The results also show that the translations using BPE tokenisation outperformed translations using standard word-based tokenisation. The relative performance of Transformer to ConvS2S models agrees with what has been seen in existing NMT literature BIBREF20 . This is also the case when using BPE tokenisation as compared to standard word-based tokenisation techniques BIBREF21 ."
],
"extractive_spans": [
"Transformer"
],
"free_form_answer": "",
"highlighted_evidence": [
"In general, the Transformer model outperformed the ConvS2S model for all of the languages, sometimes achieving 10 BLEU points or more over the ConvS2S models."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In general, the Transformer model outperformed the ConvS2S model for all of the languages, sometimes achieving 10 BLEU points or more over the ConvS2S models. The results also show that the translations using BPE tokenisation outperformed translations using standard word-based tokenisation. The relative performance of Transformer to ConvS2S models agrees with what has been seen in existing NMT literature BIBREF20 . This is also the case when using BPE tokenisation as compared to standard word-based tokenisation techniques BIBREF21 ."
],
"extractive_spans": [
"Transformer"
],
"free_form_answer": "",
"highlighted_evidence": [
"n general, the Transformer model outperformed the ConvS2S model for all of the languages, sometimes achieving 10 BLEU points or more over the ConvS2S models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"09ce74078cfd589dbbc550f9098d2797deb1e340",
"d1a2ad43500e77c36c6453ffc6a12d64715fc6c1"
],
"answer": [
{
"evidence": [
"The Autshumato project provides parallel corpora for English to Afrikaans, isiZulu, N. Sotho, Setswana, and Xitsonga. These parallel corpora were aligned on the sentence level through a combination of automatic and manual alignment techniques."
],
"extractive_spans": [],
"free_form_answer": "English to Afrikaans, isiZulu, N. Sotho,\nSetswana, and Xitsonga parallel corpora from the Autshumato project",
"highlighted_evidence": [
"The Autshumato project provides parallel corpora for English to Afrikaans, isiZulu, N. Sotho, Setswana, and Xitsonga."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The publicly-available Autshumato parallel corpora are aligned corpora of South African governmental data which were created for use in machine translation systems BIBREF15 . The datasets are available for download at the South African Centre for Digital Language Resources website. The datasets were created as part of the Autshumato project which aims to provide access to data to aid in the development of open-source translation systems in South Africa."
],
"extractive_spans": [
"Autshumato"
],
"free_form_answer": "",
"highlighted_evidence": [
"The publicly-available Autshumato parallel corpora are aligned corpora of South African governmental data which were created for use in machine translation systems BIBREF15 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What evaluation metrics did they use?",
"What NMT techniques did they explore?",
"What was their best performing model?",
"What datasets did they use?"
],
"question_id": [
"d2d9c7177728987d9e8b0c44549bbe03c8c00ef2",
"6657ece018b1455035421b822ea2d7961557c645",
"175cddfd0bcd77b7327b62f99e57d8ea93f8d8ba",
"f0afc116809b70528226d37190e8e79e1e9cd11e"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: BLEU scores for English-to-Target language translation for related work.",
"Table 2: Summary statistics for each dataset.",
"Table 3: Examples of issues pertaining to the isiZulu dataset.",
"Table 4: BLEU scores calculated for each model, for English-to-Target language translations on test sets.",
"Figure 1: Visualizations of multi-head attention for an English sentence translated to Afrikaans using the Transformer model.",
"Table 5: English to Afrikaans Translations: For the source sentence we show the reference translation, and the translations by the various models. We also show the translation of the results back to English, performed by an Afrikaans speaker.",
"Table 6: English to isiZulu Translations: For the source sentence, we show the reference translation, and the translations by the various models. We also show the translation of the results back to English, performed by a isiZulu speaker.",
"Table 9: English to Xitsonga Translations: For the source sentence we show the reference translation, and the translations by the various models. We also show the translation of the results back to English, performed by a Xitsonga speaker.",
"Table 7: English to Northern Sotho Translations: For the source sentence we show the reference translation, and the translations by the various models. We also show the translation of the results back to English, performed by a Northern Sotho speaker.",
"Table 8: English to Setswana Translations: For the source sentence we show the reference translation, and the translations by the various models. We also show the translation of the results back to English, performed by a Setswana speaker.",
"Figure 2: Visualization of multi-head attention for Layer 5 for the word “concerned”. “Concerned” translates to “tshwenyegile” while “gore” is a connecting word like “that”.",
"Figure 3: The BLEU scores for the ConvS2S of each target language w.r.t the number of BPE tokens.",
"Table 10: For each source sentence we show the reference translation, and the translations by the various models. We also show the translation of the results back to English, performed by a home-language speaker."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"5-Table4-1.png",
"6-Figure1-1.png",
"6-Table5-1.png",
"6-Table6-1.png",
"7-Table9-1.png",
"7-Table7-1.png",
"7-Table8-1.png",
"8-Figure2-1.png",
"8-Figure3-1.png",
"11-Table10-1.png"
]
} | [
"What datasets did they use?"
] | [
[
"1906.05685-Data-1",
"1906.05685-Data-0"
]
] | [
"English to Afrikaans, isiZulu, N. Sotho,\nSetswana, and Xitsonga parallel corpora from the Autshumato project"
] | 281 |
1707.02892 | A Generalized Recurrent Neural Architecture for Text Classification with Multi-Task Learning | Multi-task learning leverages potential correlations among related tasks to extract common features and yield performance gains. However, most previous works only consider simple or weak interactions, thereby failing to model complex correlations among three or more tasks. In this paper, we propose a multi-task learning architecture with four types of recurrent neural layers to fuse information across multiple related tasks. The architecture is structurally flexible and considers various interactions among tasks, which can be regarded as a generalized case of many previous works. Extensive experiments on five benchmark datasets for text classification show that our model can significantly improve performances of related tasks with additional information from others. | {
"paragraphs": [
[
"Neural network based models have been widely exploited with the prosperities of Deep Learning BIBREF0 and achieved inspiring performances on many NLP tasks, such as text classification BIBREF1 , BIBREF2 , semantic matching BIBREF3 , BIBREF4 and machine translation BIBREF5 . These models are robust at feature engineering and can represent words, sentences and documents as fix-length vectors, which contain rich semantic information and are ideal for subsequent NLP tasks.",
"One formidable constraint of deep neural networks (DNN) is their strong reliance on large amounts of annotated corpus due to substantial parameters to train. A DNN trained on limited data is prone to overfitting and incapable to generalize well. However, constructions of large-scale high-quality labeled datasets are extremely labor-intensive. To solve the problem, these models usually employ a pre-trained lookup table, also known as Word Embedding BIBREF6 , to map words into vectors with semantic implications. However, this method just introduces extra knowledge and does not directly optimize the targeted task. The problem of insufficient annotated resources is not solved either.",
"Multi-task learning leverages potential correlations among related tasks to extract common features, increase corpus size implicitly and yield classification improvements. Inspired by BIBREF7 , there are a large literature dedicated for multi-task learning with neural network based models BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . These models basically share some lower layers to capture common features and further feed them to subsequent task-specific layers, which can be classified into three types:",
"In this paper, we propose a generalized multi-task learning architecture with four types of recurrent neural layers for text classification. The architecture focuses on Type-III, which involves more complicated interactions but has not been researched yet. All the related tasks are jointly integrated into a single system and samples from different tasks are trained in parallel. In our model, every two tasks can directly interact with each other and selectively absorb useful information, or communicate indirectly via a shared intermediate layer. We also design a global memory storage to share common features and collect interactions among all tasks.",
"We conduct extensive experiments on five benchmark datasets for text classification. Compared to learning separately, jointly learning multiple relative tasks in our model demonstrate significant performance gains for each task.",
"Our contributions are three-folds:"
],
[
"For a single supervised text classification task, the input is a word sequences denoted by INLINEFORM0 , and the output is the corresponding class label INLINEFORM1 or class distribution INLINEFORM2 . A lookup layer is used first to get the vector representation INLINEFORM3 of each word INLINEFORM4 . A classification model INLINEFORM5 is trained to transform each INLINEFORM6 into a predicted distribution INLINEFORM7 . DISPLAYFORM0 ",
"and the training objective is to minimize the total cross-entropy of the predicted and true distributions over all samples. DISPLAYFORM0 ",
"where INLINEFORM0 denotes the number of training samples and INLINEFORM1 is the class number."
],
[
"Given INLINEFORM0 supervised text classification tasks, INLINEFORM1 , a jointly learning model INLINEFORM2 is trained to transform multiple inputs into a combination of predicted distributions in parallel. DISPLAYFORM0 ",
"where INLINEFORM0 are sequences from each tasks and INLINEFORM1 are the corresponding predictions.",
"The overall training objective of INLINEFORM0 is to minimize the weighted linear combination of costs for all tasks. DISPLAYFORM0 ",
"where INLINEFORM0 denotes the number of sample collections, INLINEFORM1 and INLINEFORM2 are class numbers and weights for each task INLINEFORM3 respectively."
],
[
"Different tasks may differ in characteristics of the word sequences INLINEFORM0 or the labels INLINEFORM1 . We compare lots of benchmark tasks for text classification and conclude three different perspectives of multi-task learning.",
"Multi-Cardinality Tasks are similar except for cardinality parameters, for example, movie review datasets with different average sequence lengths and class numbers.",
"Multi-Domain Tasks involve contents of different domains, for example, product review datasets on books, DVDs, electronics and kitchen appliances.",
"Multi-Objective Tasks are designed for different objectives, for example, sentiment analysis, topics classification and question type judgment.",
"The simplest multi-task learning scenario is that all tasks share the same cardinality, domain and objective, while come from different sources, so it is intuitive that they can obtain useful information from each other. However, in the most complex scenario, tasks may vary in cardinality, domain and even objective, where the interactions among different tasks can be quite complicated and implicit. We will evaluate our model on different scenarios in the Experiment section."
],
[
"Recently neural network based models have obtained substantial interests in many natural language processing tasks for their capabilities to represent variable-length text sequences as fix-length vectors, for example, Neural Bag-of-Words (NBOW), Recurrent Neural Networks (RNN), Recursive Neural Networks (RecNN) and Convolutional Neural Network (CNN). Most of them first map sequences of words, n-grams or other semantic units into embedding representations with a pre-trained lookup table, then fuse these vectors with different architectures of neural networks, and finally utilize a softmax layer to predict categorical distribution for specific classification tasks. For recurrent neural network, input vectors are absorbed one by one in a recurrent way, which makes RNN particularly suitable for natural language processing tasks."
],
[
"A recurrent neural network maintains a internal hidden state vector INLINEFORM0 that is recurrently updated by a transition function INLINEFORM1 . At each time step INLINEFORM2 , the hidden state INLINEFORM3 is updated according to the current input vector INLINEFORM4 and the previous hidden state INLINEFORM5 . DISPLAYFORM0 ",
"where INLINEFORM0 is usually a composition of an element-wise nonlinearity with an affine transformation of both INLINEFORM1 and INLINEFORM2 .",
"In this way, recurrent neural networks can comprehend a sequence of arbitrary length into a fix-length vector and feed it to a softmax layer for text classification or other NLP tasks. However, gradient vector of INLINEFORM0 can grow or decay exponentially over long sequences during training, also known as the gradient exploding or vanishing problems, which makes it difficult to learn long-term dependencies and correlations for RNNs.",
" BIBREF12 proposed Long Short-Term Memory Network (LSTM) to tackle the above problems. Apart from the internal hidden state INLINEFORM0 , LSTM also maintains a internal hidden memory cell and three gating mechanisms. While there are numerous variants of the standard LSTM, here we follow the implementation of BIBREF13 . At each time step INLINEFORM1 , states of the LSTM can be fully represented by five vectors in INLINEFORM2 , an input gate INLINEFORM3 , a forget gate INLINEFORM4 , an output gate INLINEFORM5 , the hidden state INLINEFORM6 and the memory cell INLINEFORM7 , which adhere to the following transition functions. DISPLAYFORM0 ",
" where INLINEFORM0 is the current input, INLINEFORM1 denotes logistic sigmoid function and INLINEFORM2 denotes element-wise multiplication. By selectively controlling portions of the memory cell INLINEFORM3 to update, erase and forget at each time step, LSTM can better comprehend long-term dependencies with respect to labels of the whole sequences."
],
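The transition functions above sit behind a DISPLAYFORM placeholder; the sketch below implements the standard LSTM step they describe (input, forget and output gates plus a gated memory update). The fused weight-matrix layout and the shapes are implementation choices, not details taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step; W has shape (4*H, D+H), b has shape (4*H,)."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x_t, h_prev]) + b   # pre-activations of i, f, o, g
    i = sigmoid(z[:H])                          # input gate
    f = sigmoid(z[H:2 * H])                     # forget gate
    o = sigmoid(z[2 * H:3 * H])                 # output gate
    g = np.tanh(z[3 * H:])                      # candidate memory content
    c_t = f * c_prev + i * g                    # selectively erase and write
    h_t = o * np.tanh(c_t)                      # expose part of the memory
    return h_t, c_t
```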
[
"Based on the LSTM implementation of BIBREF13 , we propose a generalized multi-task learning architecture for text classification with four types of recurrent neural layers to convey information inside and among tasks. Figure FIGREF21 illustrates the structure design and information flows of our model, where three tasks are jointly learned in parallel.",
"As Figure FIGREF21 shows, each task owns a LSTM-based Single Layer for intra-task learning. Pair-wise Coupling Layer and Local Fusion Layer are designed for direct and indirect inter-task interactions. And we further utilize a Global Fusion Layer to maintain a global memory for information shared among all tasks.",
"Each task owns a LSTM-based Single Layer with a collection of parameters INLINEFORM0 , taking Eqs.() for example. DISPLAYFORM0 ",
"Input sequences of each task are transformed into vector representations INLINEFORM0 , which are later recurrently fed into the corresponding Single Layers. The hidden states at the last time step INLINEFORM1 of each Single Layer can be regarded as fix-length representations of the whole sequences, which are followed by a fully connected layer and a softmax non-linear layer to produce class distributions. DISPLAYFORM0 ",
"where INLINEFORM0 is the predicted class distribution for INLINEFORM1 .",
"Besides Single Layers, we design Coupling Layers to model direct pair-wise interactions between tasks. For each pair of tasks, hidden states and memory cells of the Single Layers can obtain extra information directly from each other, as shown in Figure FIGREF21 .",
"We re-define Eqs.( EQREF26 ) and utilize a gating mechanism to control the portion of information flows from one task to another. The memory content INLINEFORM0 of each Single Layer is updated on the leverage of pair-wise couplings. DISPLAYFORM0 ",
" where INLINEFORM0 controls the portion of information flow from INLINEFORM1 to INLINEFORM2 , based on the correlation strength between INLINEFORM3 and INLINEFORM4 at the current time step.",
"In this way, the hidden states and memory cells of each Single Layer can obtain extra information from other tasks and stronger relevance results in higher chances of reception.",
"Different from Coupling Layers, Local Fusion Layers introduce a shared bi-directional LSTM Layer to model indirect pair-wise interactions between tasks. For each pair of tasks, we feed the Local Fusion Layer with the concatenation of both inputs, INLINEFORM0 , as shown in Figure FIGREF21 . We denote the output of the Local Fusion Layer as INLINEFORM1 , a concatenation of hidden states from the forward and backward LSTM at each time step.",
"Similar to Coupling Layers, hidden states and memory cells of the Single Layers can selectively decide how much information to accept from the pair-wise Local Fusion Layers. We re-define Eqs.( EQREF29 ) by considering the interactions between the memory content INLINEFORM0 and outputs of the Local Fusion Layers as follows. DISPLAYFORM0 ",
" where INLINEFORM0 denotes the coupling term in Eqs.( EQREF29 ) and INLINEFORM1 represents the local fusion term. Again, we employ a gating mechanism INLINEFORM2 to control the portion of information flow from the Local Coupling Layers to INLINEFORM3 .",
"Indirect interactions between Single Layers can be pair-wise or global, so we further propose the Global Fusion Layer as a shared memory storage among all tasks. The Global Fusion Layer consists of a bi-directional LSTM Layer with the inputs INLINEFORM0 and the outputs INLINEFORM1 .",
"We denote the global fusion term as INLINEFORM0 and the memory content INLINEFORM1 is calculated as follows. DISPLAYFORM0 ",
"As a result, our architecture covers complicated interactions among different tasks. It is capable of mapping a collection of input sequences from different tasks into a combination of predicted class distributions in parallel, as shown in Eqs.( EQREF11 )."
],
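The exact coupling and fusion equations are likewise hidden behind placeholders, so the snippet below is only a schematic reading of the text: the memory content of one task's Single Layer absorbs gated contributions from the other tasks' hidden states (coupling) and from the shared bi-directional LSTM outputs (local and global fusion). The gate parameterization and the shared gate weights are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fused_memory_content(c_own, other_states, fusion_outputs, W_g, b_g):
    """Schematic memory-content update for one task's Single Layer.

    c_own          : (H,) the task's own candidate memory content
    other_states   : list of (H,) hidden states from the other tasks (coupling)
    fusion_outputs : list of (H,) outputs of local/global fusion layers
    W_g, b_g       : (H, H) and (H,) gate parameters (shared here for brevity)
    """
    c_new = c_own.copy()
    for source in other_states + fusion_outputs:
        gate = sigmoid(W_g @ source + b_g)   # how much information to accept
        c_new += gate * source               # gated contribution from that source
    return c_new
```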
[
"Most previous multi-task learning models BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 belongs to Type-I or Type-II. The total number of input samples is INLINEFORM0 , where INLINEFORM1 are the sample numbers of each task.",
"However, our model focuses on Type-III and requires a 4-D tensor INLINEFORM0 as inputs, where INLINEFORM1 are total number of input collections, task number, sequence length and embedding size respectively. Samples from different tasks are jointly learned in parallel so the total number of all possible input collections is INLINEFORM2 . We propose a Task Oriented Sampling algorithm to generate sample collections for improvements of a specific task INLINEFORM3 .",
"[ht] Task Oriented Sampling [1] INLINEFORM0 samples from each task INLINEFORM1 ; INLINEFORM2 , the oriented task index; INLINEFORM3 , upsampling coefficient s.t. INLINEFORM4 sequence collections INLINEFORM5 and label combinations INLINEFORM6 ",
"each INLINEFORM0 generate a set INLINEFORM1 with INLINEFORM2 samples for each task: INLINEFORM3 repeat each sample for INLINEFORM4 times INLINEFORM5 randomly select INLINEFORM6 samples without replacements randomly select INLINEFORM7 samples with replacements each INLINEFORM8 randomly select a sample from each INLINEFORM9 without replacements combine their features and labels as INLINEFORM10 and INLINEFORM11 merge all INLINEFORM12 and INLINEFORM13 to produce the sequence collections INLINEFORM14 and label combinations INLINEFORM15 ",
"Given the generated sequence collections INLINEFORM0 and label combinations INLINEFORM1 , the overall loss function can be calculated based on Eqs.( EQREF12 ) and ( EQREF27 ). The training process is conducted in a stochastic manner until convergence. For each loop, we randomly select a collection from the INLINEFORM2 candidates and update the parameters by taking a gradient step."
],
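A minimal sketch of Task Oriented Sampling as read above: the oriented task is upsampled by repeating each of its samples, the other tasks are sampled to the same pool size (without replacement when enough samples exist, with replacement otherwise), and each collection draws one sample per task. Shuffling and the exact tie-breaking are assumptions.

```python
import random

def task_oriented_sampling(task_samples, oriented, n0, seed=0):
    """task_samples: list of per-task sample lists; oriented: task index; n0 >= 1."""
    rng = random.Random(seed)
    target = n0 * len(task_samples[oriented])
    pools = []
    for t, samples in enumerate(task_samples):
        if t == oriented:
            pool = [s for s in samples for _ in range(n0)]       # repeat n0 times
        elif len(samples) >= target:
            pool = rng.sample(samples, target)                   # without replacement
        else:
            pool = [rng.choice(samples) for _ in range(target)]  # with replacement
        rng.shuffle(pool)
        pools.append(pool)
    # one training collection = one sample from every task, consumed without replacement
    return [tuple(pool[j] for pool in pools) for j in range(target)]
```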
[
"In this section, we design three different scenarios of multi-task learning based on five benchmark datasets for text classification. we investigate the empirical performances of our model and compare it to existing state-of-the-art models."
],
[
"As Table TABREF35 shows, we select five benchmark datasets for text classification and design three experiment scenarios to evaluate the performances of our model.",
"Multi-Cardinality Movie review datasets with different average lengths and class numbers, including SST-1 BIBREF14 , SST-2 and IMDB BIBREF15 .",
"Multi-Domain Product review datasets on different domains from Multi-Domain Sentiment Dataset BIBREF16 , including Books, DVDs, Electronics and Kitchen.",
"Multi-Objective Classification datasets with different objectives, including IMDB, RN BIBREF17 and QC BIBREF18 ."
],
[
"The whole network is trained through back propagation with stochastic gradient descent BIBREF19 . We obtain a pre-trained lookup table by applying Word2Vec BIBREF20 on the Google News corpus, which contains more than 100B words with a vocabulary size of about 3M. All involved parameters are randomly initialized from a truncated normal distribution with zero mean and standard deviation.",
"For each task INLINEFORM0 , we conduct TOS with INLINEFORM1 to improve its performance. After training our model on the generated sample collections, we evaluate the performance of task INLINEFORM2 by comparing INLINEFORM3 and INLINEFORM4 on the test set. We apply 10-fold cross-validation and different combinations of hyperparameters are investigated, of which the best one, as shown in Table TABREF41 , is reserved for comparisons with state-of-the-art models."
],
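A sketch of the setup described above, assuming the gensim and scipy packages: the pre-trained Google News word2vec vectors serve as the lookup table, and the remaining parameters are drawn from a truncated normal distribution. The file name and the standard deviation value are placeholders, not values taken from the paper.

```python
import numpy as np
from gensim.models import KeyedVectors
from scipy.stats import truncnorm

# Pre-trained lookup table (path is a placeholder).
w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)
embedding = w2v["learning"]          # 300-dimensional vector for one vocabulary word

# Truncated normal initialization for all other parameters (std is a placeholder).
def init_weights(shape, std=0.1):
    return truncnorm.rvs(-2, 2, loc=0.0, scale=std, size=shape).astype(np.float32)

W_hidden = init_weights((400, 700))
```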
[
"We compare performances of our model with the implementation of BIBREF13 and the results are shown in Table TABREF43 . Our model obtains better performances in Multi-Domain scenario with an average improvement of 4.5%, where datasets are product reviews on different domains with similar sequence lengths and the same class number, thus producing stronger correlations. Multi-Cardinality scenario also achieves significant improvements of 2.77% on average, where datasets are movie reviews with different cardinalities.",
"However, Multi-Objective scenario benefits less from multi-task learning due to lacks of salient correlation among sentiment, topic and question type. The QC dataset aims to classify each question into six categories and its performance even gets worse, which may be caused by potential noises introduced by other tasks. In practice, the structure of our model is flexible, as couplings and fusions between some empirically unrelated tasks can be removed to alleviate computation costs.",
"We further explore the influence of INLINEFORM0 in TOS on our model, which can be any positive integer. A higher value means larger and more various samples combinations, while requires higher computation costs.",
"Figure FIGREF45 shows the performances of datasets in Multi-Domain scenario with different INLINEFORM0 . Compared to INLINEFORM1 , our model can achieve considerable improvements when INLINEFORM2 as more samples combinations are available. However, there are no more salient gains as INLINEFORM3 gets larger and potential noises from other tasks may lead to performance degradations. For a trade-off between efficiency and effectiveness, we determine INLINEFORM4 as the optimal value for our experiments.",
"In order to measure the correlation strength between two task INLINEFORM0 and INLINEFORM1 , we learn them jointly with our model and define Pair-wise Performance Gain as INLINEFORM2 , where INLINEFORM3 are the performances of tasks INLINEFORM4 and INLINEFORM5 when learned individually and jointly.",
"We calculate PPGs for every two tasks in Table TABREF35 and illustrate the results in Figure FIGREF47 , where darkness of colors indicate strength of correlation. It is intuitive that datasets of Multi-Domain scenario obtain relatively higher PPGs with each other as they share similar cardinalities and abundant low-level linguistic characteristics. Sentences of QC dataset are much shorter and convey unique characteristics from other tasks, thus resulting in quite lower PPGs."
],
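The exact PPG formula is hidden behind an INLINEFORM placeholder; one reasonable reading, sketched below, averages the gains of the two tasks when trained jointly versus individually. Treat this aggregation as an assumption rather than the paper's definition.

```python
def pairwise_performance_gain(perf_solo_a, perf_solo_b, perf_joint_a, perf_joint_b):
    """Average accuracy gain of two tasks from joint training over individual training."""
    return 0.5 * ((perf_joint_a - perf_solo_a) + (perf_joint_b - perf_solo_b))

# Two tasks gaining 2.0 and 3.0 accuracy points yield a PPG of 2.5.
print(pairwise_performance_gain(80.0, 75.0, 82.0, 78.0))
```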
[
"We apply the optimal hyperparameter settings and compare our model against the following state-of-the-art models:",
"NBOW Neural Bag-of-Words that simply sums up embedding vectors of all words.",
"PV Paragraph Vectors followed by logistic regression BIBREF21 .",
"MT-RNN Multi-Task learning with Recurrent Neural Networks by a shared-layer architecture BIBREF11 .",
"MT-CNN Multi-Task learning with Convolutional Neural Networks BIBREF8 where lookup tables are partially shared.",
"MT-DNN Multi-Task learning with Deep Neural Networks BIBREF9 that utilizes bag-of-word representations and a hidden shared layer.",
"GRNN Gated Recursive Neural Network for sentence modeling BIBREF1 .",
"As Table TABREF48 shows, our model obtains competitive or better performances on all tasks except for the QC dataset, as it contains poor correlations with other tasks. MT-RNN slightly outperforms our model on SST, as sentences from this dataset are much shorter than those from IMDB and MDSD, and another possible reason may be that our model are more complex and requires larger data for training. Our model proposes the designs of various interactions including coupling, local and global fusion, which can be further implemented by other state-of-the-art models and produce better performances."
],
[
"There are a large body of literatures related to multi-task learning with neural networks in NLP BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 .",
" BIBREF8 belongs to Type-I and utilizes shared lookup tables for common features, followed by task-specific neural layers for several traditional NLP tasks such as part-of-speech tagging and semantic parsing. They use a fix-size window to solve the problem of variable-length texts, which can be better handled by recurrent neural networks.",
" BIBREF9 , BIBREF10 , BIBREF11 all belong to Type-II where samples from different tasks are learned sequentially. BIBREF9 applies bag-of-word representation and information of word orders are lost. BIBREF10 introduces an external memory for information sharing with a reading/writing mechanism for communicating, and BIBREF11 proposes three different models for multi-task learning with recurrent neural networks. However, models of these two papers only involve pair-wise interactions, which can be regarded as specific implementations of Coupling Layer and Fusion Layer in our model.",
"Different from the above models, our model focuses on Type-III and utilize recurrent neural networks to comprehensively capture various interactions among tasks, both direct and indirect, local and global. Three or more tasks are learned simultaneously and samples from different tasks are trained in parallel benefitting from each other, thus obtaining better sentence representations."
],
[
"In this paper, we propose a multi-task learning architecture for text classification with four types of recurrent neural layers. The architecture is structurally flexible and can be regarded as a generalized case of many previous works with deliberate designs. We explore three different scenarios of multi-task learning and our model can improve performances of most tasks with additional related information from others in all scenarios.",
"In future work, we would like to investigate further implementations of couplings and fusions, and conclude more multi-task learning perspectives."
]
],
"section_name": [
"Introduction",
"Single-Task Learning",
"Multi-Task Learning",
"Three Perspectives of Multi-Task Learning",
"Methodology",
"Recurrent Neural Network",
"A Generalized Architecture",
"Sampling & Training",
"Experiment",
"Datasets",
"Hyperparameters and Training",
"Results",
"Comparisons with State-of-the-art Models",
"Related Work",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"474671ec2742e393afe610a6ed66fdab2f572bc8",
"d153ad58cda1bdd8ec248d5efe149172cfb561a8"
],
"answer": [
{
"evidence": [
"In this section, we design three different scenarios of multi-task learning based on five benchmark datasets for text classification. we investigate the empirical performances of our model and compare it to existing state-of-the-art models."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"we investigate the empirical performances of our model and compare it to existing state-of-the-art models."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"In this section, we design three different scenarios of multi-task learning based on five benchmark datasets for text classification. we investigate the empirical performances of our model and compare it to existing state-of-the-art models."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In this section, we design three different scenarios of multi-task learning based on five benchmark datasets for text classification. we investigate the empirical performances of our model and compare it to existing state-of-the-art models."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0aa996cb4caaa57df45d3bb1ee1697e0d3f942cd",
"82d361aa7c158b7706d8d5d99749888eebb559ad"
],
"answer": [
{
"evidence": [
"Multi-Cardinality Movie review datasets with different average lengths and class numbers, including SST-1 BIBREF14 , SST-2 and IMDB BIBREF15 .",
"Multi-Domain Product review datasets on different domains from Multi-Domain Sentiment Dataset BIBREF16 , including Books, DVDs, Electronics and Kitchen.",
"Multi-Objective Classification datasets with different objectives, including IMDB, RN BIBREF17 and QC BIBREF18 ."
],
"extractive_spans": [
"SST-1 BIBREF14",
"SST-2",
"IMDB BIBREF15",
"Multi-Domain Sentiment Dataset BIBREF16",
"RN BIBREF17",
"QC BIBREF18"
],
"free_form_answer": "",
"highlighted_evidence": [
"Multi-Cardinality Movie review datasets with different average lengths and class numbers, including SST-1 BIBREF14 , SST-2 and IMDB BIBREF15 .\n\nMulti-Domain Product review datasets on different domains from Multi-Domain Sentiment Dataset BIBREF16 , including Books, DVDs, Electronics and Kitchen.\n\nMulti-Objective Classification datasets with different objectives, including IMDB, RN BIBREF17 and QC BIBREF18 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Multi-Cardinality Movie review datasets with different average lengths and class numbers, including SST-1 BIBREF14 , SST-2 and IMDB BIBREF15 .",
"Multi-Domain Product review datasets on different domains from Multi-Domain Sentiment Dataset BIBREF16 , including Books, DVDs, Electronics and Kitchen.",
"Multi-Objective Classification datasets with different objectives, including IMDB, RN BIBREF17 and QC BIBREF18 ."
],
"extractive_spans": [
"SST-1",
"SST-2",
"IMDB",
"Multi-Domain Sentiment Dataset",
"RN",
"QC"
],
"free_form_answer": "",
"highlighted_evidence": [
"Multi-Cardinality Movie review datasets with different average lengths and class numbers, including SST-1 BIBREF14 , SST-2 and IMDB BIBREF15 .\n\nMulti-Domain Product review datasets on different domains from Multi-Domain Sentiment Dataset BIBREF16 , including Books, DVDs, Electronics and Kitchen.\n\nMulti-Objective Classification datasets with different objectives, including IMDB, RN BIBREF17 and QC BIBREF18 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"1385da8338772d085992109bba81a739558a844e",
"4a97c535d14dd42be16989691812941d97443eda"
],
"answer": [
{
"evidence": [
"As Table TABREF35 shows, we select five benchmark datasets for text classification and design three experiment scenarios to evaluate the performances of our model.",
"Multi-Cardinality Movie review datasets with different average lengths and class numbers, including SST-1 BIBREF14 , SST-2 and IMDB BIBREF15 .",
"Multi-Domain Product review datasets on different domains from Multi-Domain Sentiment Dataset BIBREF16 , including Books, DVDs, Electronics and Kitchen.",
"Multi-Objective Classification datasets with different objectives, including IMDB, RN BIBREF17 and QC BIBREF18 ."
],
"extractive_spans": [
"different average lengths and class numbers",
"Multi-Domain Product review datasets on different domains",
"Multi-Objective Classification datasets with different objectives"
],
"free_form_answer": "",
"highlighted_evidence": [
"As Table TABREF35 shows, we select five benchmark datasets for text classification and design three experiment scenarios to evaluate the performances of our model.\n\nMulti-Cardinality Movie review datasets with different average lengths and class numbers, including SST-1 BIBREF14 , SST-2 and IMDB BIBREF15 .\n\nMulti-Domain Product review datasets on different domains from Multi-Domain Sentiment Dataset BIBREF16 , including Books, DVDs, Electronics and Kitchen.\n\nMulti-Objective Classification datasets with different objectives, including IMDB, RN BIBREF17 and QC BIBREF18 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Five benchmark classification datasets: SST, IMDB, MDSD, RN, QC."
],
"extractive_spans": [],
"free_form_answer": "Sentiment classification, topics classification, question classification.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Five benchmark classification datasets: SST, IMDB, MDSD, RN, QC."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"aab80405a1b1bcd89a747c56a9ce4cbf466fdfc9",
"c2ccdf63431be581639135f795430499874de30c"
],
"answer": [
{
"evidence": [
"Based on the LSTM implementation of BIBREF13 , we propose a generalized multi-task learning architecture for text classification with four types of recurrent neural layers to convey information inside and among tasks. Figure FIGREF21 illustrates the structure design and information flows of our model, where three tasks are jointly learned in parallel."
],
"extractive_spans": [
"LSTM"
],
"free_form_answer": "",
"highlighted_evidence": [
"Based on the LSTM implementation of BIBREF13 , we propose a generalized multi-task learning architecture for text classification with four types of recurrent neural layers to convey information inside and among tasks."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Based on the LSTM implementation of BIBREF13 , we propose a generalized multi-task learning architecture for text classification with four types of recurrent neural layers to convey information inside and among tasks. Figure FIGREF21 illustrates the structure design and information flows of our model, where three tasks are jointly learned in parallel."
],
"extractive_spans": [],
"free_form_answer": "LSTM with 4 types of recurrent neural layers.",
"highlighted_evidence": [
"Based on the LSTM implementation of BIBREF13 , we propose a generalized multi-task learning architecture for text classification with four types of recurrent neural layers to convey information inside and among tasks."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Do they compare against state-of-the-art?",
"What are the benchmark datasets?",
"What tasks are the models trained on?",
"What recurrent neural networks are explored?"
],
"question_id": [
"5752c8d333afc1e6c666b18d1477c8f669b7a602",
"fcdafaea5b1c9edee305b81f6865efc8b8dc50d3",
"91d4fd5796c13005fe306bcd895caaed7fa77030",
"27d7a30e42921e77cfffafac5cb0d16ce5a7df99"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: A generalized recurrent neural architecture for modeling text with multi-task learning",
"Table 1: Five benchmark classification datasets: SST, IMDB, MDSD, RN, QC.",
"Figure 2: Influences of n0 in TOS on different datasets",
"Table 2: Hyperparameter settings",
"Table 4: Comparisons with state-of-the-art models",
"Table 3: Results of our model on different scenarios"
],
"file": [
"3-Figure1-1.png",
"5-Table1-1.png",
"5-Figure2-1.png",
"5-Table2-1.png",
"6-Table4-1.png",
"6-Table3-1.png"
]
} | [
"What tasks are the models trained on?",
"What recurrent neural networks are explored?"
] | [
[
"1707.02892-Datasets-3",
"1707.02892-Datasets-2",
"1707.02892-Datasets-1",
"1707.02892-Datasets-0",
"1707.02892-5-Table1-1.png"
],
[
"1707.02892-A Generalized Architecture-0"
]
] | [
"Sentiment classification, topics classification, question classification.",
"LSTM with 4 types of recurrent neural layers."
] | 286 |
2002.06851 | GameWikiSum: a Novel Large Multi-Document Summarization Dataset | Today's research progress in the field of multi-document summarization is obstructed by the small number of available datasets. Since the acquisition of reference summaries is costly, existing datasets contain only hundreds of samples at most, resulting in heavy reliance on hand-crafted features or necessitating additional, manually annotated data. The lack of large corpora therefore hinders the development of sophisticated models. Additionally, most publicly available multi-document summarization corpora are in the news domain, and no analogous dataset exists in the video game domain. In this paper, we propose GameWikiSum, a new domain-specific dataset for multi-document summarization, which is one hundred times larger than commonly used datasets, and in another domain than news. Input documents consist of long professional video game reviews as well as references of their gameplay sections in Wikipedia pages. We analyze the proposed dataset and show that both abstractive and extractive models can be trained on it. We release GameWikiSum for further research: this https URL. | {
"paragraphs": [
[
"With the growth of the internet in the last decades, users are faced with an increasing amount of information and have to find ways to summarize it. However, producing summaries in a multi-document setting is a challenging task; the language used to display the same information in a sentence can vary significantly, making it difficult for summarization models to capture. Thus large corpora are needed to develop efficient models. There exist two types of summarization: extractive and abstractive. Extractive summarization outputs summaries in two steps, namely via sentence ranking, where an importance score is assigned to each sentence, and via the subsequent sentence selection, where the most appropriate sentence is chosen. In abstractive summarization, summaries are generated word by word auto-regressively, using sequence-to-sequence or language models. Given the complexity of multi-document summarization and the lack of datasets, most researchers use extractive summarization and rely on hand-crafted features or additional annotated data, both needing human expertise.",
"To our knowledge, wiki2018 is the only work that has proposed a large dataset for multi-document summarization. By considering Wikipedia entries as a collection of summaries on various topics given by their title (e.g., Machine Learning, Stephen King), they create a dataset of significant size, where the lead section of an article is defined as the reference summary and input documents are a mixture of pages obtained from the article's reference section and a search engine. While this approach benefits from the large number of Wikipedia articles, in many cases, articles contain only a few references that tend to be of the desired high quality, and most input documents end up being obtained via a search engine, which results in noisy data. Moreover, at testing time no references are provided, as they have to be provided by human contributors. wiki2018 showed that in this case, generated summaries based on search engine results alone are of poor quality and cannot be used.",
"In contrast, we propose a novel domain-specific dataset containing $14\\,652$ samples, based on professional video game reviews obtained via Metacritic and gameplay sections from Wikipedia. By using Metacritic reviews in addition to Wikipedia articles, we benefit from a number of factors. First, the set of aspects used to assess a game is limited and consequently, reviews share redundancy. Second, because they are written by professional journalists, reviews tend to be in-depth and of high-quality. Additionally, when a video game is released, journalists have an incentive to write a complete review and publish it online as soon as possible to draw the attention of potential customers and increase the revenue of their website BIBREF0. Therefore, several reviews for the same product become quickly available and the first version of the corresponding Wikipedia page is usually made available shortly after. Lastly, reviews and Wikipedia pages are available in multiple languages, which opens up the possibility for multilingual multi-document summarization."
],
[
"In this section, we introduce a new domain-specific corpus for the task of multi-document summarization, based on professional video game reviews and gameplay sections of Wikipedia."
],
[
"Journalists are paid to write complete reviews for various types of entertainment products, describing different aspects thoroughly. Reviewed aspects in video games include the gameplay, richness, and diversity of dialogues, or the soundtrack. Compared to usual reviews written by users, these are assumed to be of higher-quality and longer.",
"Metacritic is a website aggregating music, game, TV series, and movie reviews. In our case, we only focus on the video game section and crawl different products with their associated links, pointing to professional reviews written by journalists. It is noteworthy that we consider reviews for the same game released on different platforms (e.g., Playstation, Xbox) separately. Indeed, the final product quality might differ due to hardware constraints and some websites are specialized toward a specific platform.",
"Given a collection of professional reviews, manually creating a summary containing all key information is too costly at large scale as reviews tend to be long and thorough. To this end, we analyzed Wikipedia pages for various video games and observed that most contain a gameplay section, that is an important feature in video game reviews. Consequently, we opt for summaries describing only gameplay mechanics. Wikipedia pages are written following the Wikipedia Manual of Style and thus, guarantee summaries of a fairly uniform style. Additionally, we observed that the gameplay section often cites excerpts of professional reviews, which adds emphasis to the extractive nature of GameWikiSum.",
"In order to match games with their respective Wikipedia pages, we use the game title as the query in the Wikipedia search engine and employ a set of heuristic rules."
],
[
"We crawl approximately $265\\,000$ professional reviews for around $72\\,000$ games and $26\\,000$ Wikipedia gameplay sections. Since there is no automatic mapping between a game to its Wikipedia page, we design some heuristics. The heuristics are the followings and applied in this order:",
"Exact title match: titles must match exactly;",
"Removing tags: when a game has the same name than its franchise, its Wikipedia page has a title similar to Game (year video game) or Game (video game);",
"Extension match: sometimes, a sequel or an extension is not listed in Wikipedia. In this case, we map it to the Wikipedia page of the original game.",
"We only keep games with at least one review and a matching Wikipedia page, containing a gameplay section."
],
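A sketch of the three matching heuristics, applied in the order listed above. The title normalization and the approximation of the extension rule (falling back to the longest Wikipedia title that prefixes the game title) are simplifications, not the authors' exact implementation.

```python
import re

def match_wikipedia_page(game_title, wiki_titles):
    """Return the best-matching Wikipedia title for a game, or None."""
    # 1) Exact title match.
    if game_title in wiki_titles:
        return game_title
    # 2) Disambiguation tags such as "Game (2004 video game)" or "Game (video game)".
    tagged = re.compile(re.escape(game_title) + r" \((?:\d{4} )?video game\)$")
    for title in wiki_titles:
        if tagged.match(title):
            return title
    # 3) Extension match: map a sequel/extension to the page of the original game.
    prefixes = [t for t in wiki_titles if game_title.startswith(t) and t != game_title]
    return max(prefixes, key=len) if prefixes else None

print(match_wikipedia_page("Doom", {"Doom (2016 video game)", "Quake"}))
```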
[
"We build GameWikiSum corpus by considering English reviews and Wikipedia pages. Table TABREF11 describes its overall properties. Most samples contain several reviews, whose cumulative size is too large for extractive or abstractive models to be trained in an end-to-end manner. The total vocabulary is composed of $282\\,992$ words. Our dataset also comes from a diverse set of sources: over 480 video game websites appear as source documents in at least 6 video games; they are responsible for $99.95\\%$ of the reviews.",
"Following wiki2018, a subset of the input has to be therefore first coarsely selected, using extractive summarization, before training an extractive or abstractive model that generates the Wikipedia gameplay text while conditioning on this extraction. Additionally, half of the summaries contain more than three hundred words (see Table TABREF11), which is larger than previous work.",
"To validate our hypothesis that professional game reviews focus heavily on gameplay mechanics, we compute the proportion of unigrams and bigrams of the output given the input. We observe a significant overlap ($20\\%$ documents containing $67.7\\%$ of the words mentioned in the summary, and at least $27.4\\%$ bigrams in half of the documents), emphasizing the extractive nature of GameWikiSum. Several examples of summaries are shown in Section SECREF20",
"https://www-nlpir.nist.gov/projects/duc/guidelines.html http://www.nist.gov/tac/",
"Table TABREF12 shows a comparison between GameWikiSum and other single and multi-document summarization datasets. GameWikiSum has larger input and output size than single document summarization corpora (used in extractive and abstractive models) while sharing similar word overlap ratios. Compared to DUC and TAC (news domain), GameWikiSum is also domain-specific and has two orders of magnitude more examples, facilitating the use of more powerful models. Finally, WikiSum has more samples but is more suitable for general abstractive summarization, as its articles cover a wide range of areas and have a lower word overlap ratio.",
"We divide GameWikiSum into train, validation and testing sets with a rough ratio of 80/10/10, resulting in $11\\,744$, $1\\,454$ and $1\\,454$ examples respectively. If a game has been released on several platforms (represented by different samples), we group them in the same subset to avoid review overlap between training, validation, and testing. The distribution of samples per platform is shown in Table TABREF13. We compute in addition the mean number of input documents, ROUGE-1, and ROUGE-2 scores of the output given the input. We observe that most platforms have a mean ROUGE-1 score above 80 and 30 for ROUGE-2."
],
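A sketch of the leakage-free split described above, assuming scikit-learn: all platform versions of the same game share a group id, so GroupShuffleSplit keeps them in the same subset. The 80/20 first cut shown here would then be followed by a second grouped split of the held-out part into validation and test.

```python
from sklearn.model_selection import GroupShuffleSplit

def grouped_split(indices, game_ids, test_size=0.2, seed=0):
    """Split sample indices so that every platform version of a game stays together."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, heldout_idx = next(splitter.split(indices, groups=game_ids))
    return train_idx, heldout_idx

train, heldout = grouped_split(list(range(6)), ["g1", "g1", "g2", "g3", "g3", "g4"])
print(train, heldout)
```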
[
"We use the standard ROUGE BIBREF4 used in summarization and report the ROUGE-L F1 score. ROUGE-L F1 is more appropriate to measure the quality of generated summaries in this context because summary lengths are longer than usual (see Table TABREF12) and vary across the dataset (see Table TABREF11). Another motivation to use ROUGE-L F1 is to compare abstractive models with extractive ones, as the output length is unknown a priori for the former, but not for the latter. We report in addition ROUGE-1 and ROUGE-2 recall scores.",
"To ensure consistent results across all comparative experiments, extractive models generate summaries of the same length as reference summaries. In realistic scenarios, summary lengths are not pre-defined and can be adjusted to produce different types of summaries (e.g., short, medium or long). We do not explicitly constrain the output length for abstractive models, as each summary is auto-regressively generated."
],
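A sketch of the evaluation protocol, assuming Google's rouge-score package: ROUGE-L F1 together with ROUGE-1 and ROUGE-2 recall for a generated summary against its reference.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def evaluate(reference, generated):
    scores = scorer.score(reference, generated)   # score(target, prediction)
    return {
        "rougeL_f1": scores["rougeL"].fmeasure,
        "rouge1_recall": scores["rouge1"].recall,
        "rouge2_recall": scores["rouge2"].recall,
    }

print(evaluate("the player explores a large open world and completes quests",
               "players explore an open world to complete quests"))
```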
[
"For extractive models, we include LEAD-$k$ which is a strong baseline for single document summarization tasks and takes the first $k$ sentences in the document as summary BIBREF5. TextRank BIBREF6 and LexRank BIBREF7 are two graph-based methods, where nodes are text units and edges are defined by a similarity measure. SumBasic BIBREF8 is a frequency-based sentence selection method, which uses a component to re-weigh the word probabilities in order to minimize redundancy. The last extractive baselines are the near state-of-the-art models C_SKIP from rossiello2017centroid and SemSenSum from antognini2019. The former exploits the capability of word embeddings to leverage semantics, whereas the latter aggregates two types of sentence embeddings using a sentence semantic relation graph, followed by a graph convolution.",
"We use common abstractive sequence-to-sequence baselines such as Conv2Conv BIBREF9, Transformer BIBREF10 and its language model variant, TransformerLM BIBREF3. We use implementations from fairseq and tensor2tensor. As the corpus size is too large to train extractive and abstractive models in an end-to-end manner due to hardware constraints, we use Tf-Idf to coarsely select sentences before training similarly to wiki2018. We limit the input size to 2K tokens so that all models can be trained on a Titan Xp GPU (12GB GPU RAM). We run all models with their best reported parameters."
],
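A sketch of the coarse selection stage, assuming scikit-learn: review sentences are ranked by TF-IDF cosine similarity to a query and kept greedily until the 2K-token budget is reached. Using the game title as the query mirrors the WikiSum setup and is an assumption, not a detail stated in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def coarse_select(sentences, query, budget=2000):
    """Greedily keep the sentences most similar to the query within a token budget."""
    tfidf = TfidfVectorizer().fit_transform(sentences + [query])
    sims = cosine_similarity(tfidf[len(sentences)], tfidf[:len(sentences)]).ravel()
    selected, used = [], 0
    for idx in sims.argsort()[::-1]:              # most similar sentences first
        n_tokens = len(sentences[idx].split())
        if used + n_tokens > budget:
            break
        selected.append(sentences[idx])
        used += n_tokens
    return selected
```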
[
"Table TABREF19 contains the results. LEAD-5 achieves less than 20 for ROUGE-L as well as ROUGE-1 and less than $3.5$ for ROUGE-2. Taking only 3 sentences leads to even worse results: below 13 and 3 respectively. Unlike in other datasets, these results are significantly outperformed by all other extractive models but surprisingly, abstractive models perform worse on average. This demonstrates the difficulty of the task in GameWikiSum compared to nallapati2016abstractive and graff2003.",
"For extractive models, TextRank and LexRank perform worse than other models. The frequency-based model SumBasic performs slightly better but does not achieve comparable results with embedding-based models. Best results are obtained with C_SKIP and SemSentSum, showing that more sophisticated models can be trained on GameWikiSum and improve results significantly. Interestingly, taking into account the context of a sentence and hence better capturing the semantics, SemSentSum achieves only slightly better scores than C_SKIP, which relies solely on word embedding. We show in Section SECREF20 several examples with their original summaries and generated ones with the best model.",
"Overall, the abstractive performance of sequence-to-sequence and language models are significantly lower than C_SKIP and SemSentSum in terms of ROUGE-L and ROUGE-1. However, Conv2Conv obtains only $0.05$ less ROUGE-2 score compared to C_SKIP and $0.36$ to SemSentSum. We suspect ROUGE-2 to be easier for abstractive sequence-to-sequence models, as half of the samples only have a ROUGE-2 around $27.00$ without any limitation of the input size (see Table TABREF11). Consequently, copying sentences from a small subset of the whole input documents for extractive models leads to worse ROUGE-2 recall. A normal transformer underperforms compared to Conv2Conv, and its language model variant achieves significantly worse results than other models due to a lack of data.",
"We highlight that GameWikiSum has two orders of magnitude fewer samples (see Table TABREF12) compared to wiki2018. Therefore, it is necessary to have either additional annotated data or pre-train TransformerLM on another corpus."
],
[
"Figure FIGREF21 shows two samples with their gameplay sections from Wikipedia and summaries generated by the best baseline SemSentSum. In the first example, we notice that the model has selected sentences from the reviews that are also in the original Wikipedia page. Additionally, we observe, for both examples, that several text fragments describe the same content with different sentences. Consequently, this supports our hypothesis that professional reviews can be used in a multi-document summarization setting to produce summaries reflecting the gameplay section of Wikipedia pages.",
"https://en.wikipedia.org/wiki/Rabbids_Land https://en.wikipedia.org/wiki/Little_Tournament_Over_Yonder"
],
[
"To the best of our knowledge, DUC and TAC are the first multi-document summarization datasets. They contain documents about the same event and human-written summaries. Unsurprisingly, this approach does not scale and they could only collect hundreds of samples as shown in Table TABREF12.",
"zopf2016next applied a similar strategy using Wikipedia, where they asked annotators to first tag and extract information nuggets from the lead section of Wikipedia articles. In a further step, the same annotators searched for source documents using web search engines. As the whole process depends on humans, they could only collect around one thousand samples. Other attempts such as BIBREF11 have been made using Twitter, but the resulting dataset size was even smaller.",
"Only the recent work of wiki2018 addresses the automatic creation of a large-scale multi-document summarization corpus, WikiSum. Summaries are lead sections of Wikipedia pages and input documents a mixture of 1) its citations from the reference section 2) results from search engines using the title of the Wikipedia page as the query. However, references (provided by contributors) are needed for their model to generate lead sections which are not garbaged texts, as shown in the experiments BIBREF3. Consequently, this dataset is unusable for real use-cases. Similarly, zopf-2018-auto propose a multilingual Multi-Document dataset of approximately $7\\,000$ examples based on English and German Wikipedia articles. We, however, are focused on the video game domain and provide twice more samples."
],
[
"In this work, we introduce a new multi-document summarization dataset, GameWikiSum, based on professional video game reviews, which is one hundred times larger than commonly used datasets. We conclude that the size of GameWikiSum and its domain-specificity makes the training of abstractive and extractive models possible. In future work, we could increase the dataset with other languages and use it for multilingual multi-document summarization. We release GameWikiSum for further research: https://github.com/Diego999/GameWikiSum."
]
],
"section_name": [
"Introduction",
"GameWikiSum",
"GameWikiSum ::: Dataset Creation",
"GameWikiSum ::: Heuristic matching",
"GameWikiSum ::: Descriptive Statistics",
"Experiments and Results ::: Evaluation Metric",
"Experiments and Results ::: Baselines",
"Experiments and Results ::: Results",
"Experiments and Results ::: Examples of Original and Generated Summaries",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"27784077719efe7abfc5a6853396bf257f00a3c0",
"5d084001482430a7df39c785d5d60a49db9e6838"
],
"answer": [
{
"evidence": [
"For extractive models, we include LEAD-$k$ which is a strong baseline for single document summarization tasks and takes the first $k$ sentences in the document as summary BIBREF5. TextRank BIBREF6 and LexRank BIBREF7 are two graph-based methods, where nodes are text units and edges are defined by a similarity measure. SumBasic BIBREF8 is a frequency-based sentence selection method, which uses a component to re-weigh the word probabilities in order to minimize redundancy. The last extractive baselines are the near state-of-the-art models C_SKIP from rossiello2017centroid and SemSenSum from antognini2019. The former exploits the capability of word embeddings to leverage semantics, whereas the latter aggregates two types of sentence embeddings using a sentence semantic relation graph, followed by a graph convolution."
],
"extractive_spans": [
"LEAD-$k$",
"TextRank",
"LexRank",
"SumBasic",
"C_SKIP"
],
"free_form_answer": "",
"highlighted_evidence": [
"For extractive models, we include LEAD-$k$ which is a strong baseline for single document summarization tasks and takes the first $k$ sentences in the document as summary BIBREF5. TextRank BIBREF6 and LexRank BIBREF7 are two graph-based methods, where nodes are text units and edges are defined by a similarity measure. SumBasic BIBREF8 is a frequency-based sentence selection method, which uses a component to re-weigh the word probabilities in order to minimize redundancy. The last extractive baselines are the near state-of-the-art models C_SKIP from rossiello2017centroid and SemSenSum from antognini2019."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For extractive models, we include LEAD-$k$ which is a strong baseline for single document summarization tasks and takes the first $k$ sentences in the document as summary BIBREF5. TextRank BIBREF6 and LexRank BIBREF7 are two graph-based methods, where nodes are text units and edges are defined by a similarity measure. SumBasic BIBREF8 is a frequency-based sentence selection method, which uses a component to re-weigh the word probabilities in order to minimize redundancy. The last extractive baselines are the near state-of-the-art models C_SKIP from rossiello2017centroid and SemSenSum from antognini2019. The former exploits the capability of word embeddings to leverage semantics, whereas the latter aggregates two types of sentence embeddings using a sentence semantic relation graph, followed by a graph convolution."
],
"extractive_spans": [
" LEAD-$k$ ",
"TextRank",
"LexRank ",
"SumBasic ",
"C_SKIP "
],
"free_form_answer": "",
"highlighted_evidence": [
"For extractive models, we include LEAD-$k$ which is a strong baseline for single document summarization tasks and takes the first $k$ sentences in the document as summary BIBREF5. TextRank BIBREF6 and LexRank BIBREF7 are two graph-based methods, where nodes are text units and edges are defined by a similarity measure. SumBasic BIBREF8 is a frequency-based sentence selection method, which uses a component to re-weigh the word probabilities in order to minimize redundancy. The last extractive baselines are the near state-of-the-art models C_SKIP from rossiello2017centroid and SemSenSum from antognini2019. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"0af60f58f257a57c72f7f1c924853362279271d4",
"a1eda64c4a46d4df59488ba7552471ef62579827"
],
"answer": [
{
"evidence": [
"We use common abstractive sequence-to-sequence baselines such as Conv2Conv BIBREF9, Transformer BIBREF10 and its language model variant, TransformerLM BIBREF3. We use implementations from fairseq and tensor2tensor. As the corpus size is too large to train extractive and abstractive models in an end-to-end manner due to hardware constraints, we use Tf-Idf to coarsely select sentences before training similarly to wiki2018. We limit the input size to 2K tokens so that all models can be trained on a Titan Xp GPU (12GB GPU RAM). We run all models with their best reported parameters."
],
"extractive_spans": [
"Conv2Conv ",
"Transformer ",
" TransformerLM"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use common abstractive sequence-to-sequence baselines such as Conv2Conv BIBREF9, Transformer BIBREF10 and its language model variant, TransformerLM BIBREF3. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use common abstractive sequence-to-sequence baselines such as Conv2Conv BIBREF9, Transformer BIBREF10 and its language model variant, TransformerLM BIBREF3. We use implementations from fairseq and tensor2tensor. As the corpus size is too large to train extractive and abstractive models in an end-to-end manner due to hardware constraints, we use Tf-Idf to coarsely select sentences before training similarly to wiki2018. We limit the input size to 2K tokens so that all models can be trained on a Titan Xp GPU (12GB GPU RAM). We run all models with their best reported parameters."
],
"extractive_spans": [
"Conv2Conv",
"Transformer",
"TransformerLM"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use common abstractive sequence-to-sequence baselines such as Conv2Conv BIBREF9, Transformer BIBREF10 and its language model variant, TransformerLM BIBREF3."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"15b35027fe6c0c51d7cd6ac74ae71494280d0ade",
"e6fb9f53a079bd5768204d5ca504a56ee50384df"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Metacritic is a website aggregating music, game, TV series, and movie reviews. In our case, we only focus on the video game section and crawl different products with their associated links, pointing to professional reviews written by journalists. It is noteworthy that we consider reviews for the same game released on different platforms (e.g., Playstation, Xbox) separately. Indeed, the final product quality might differ due to hardware constraints and some websites are specialized toward a specific platform."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In our case, we only focus on the video game section and crawl different products with their associated links, pointing to professional reviews written by journalists. "
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"24e6e383254c48226886f0de0942cbfac4e880ec",
"309747b2a477587a44c733cf1835a70934e650c0"
],
"answer": [
{
"evidence": [
"In contrast, we propose a novel domain-specific dataset containing $14\\,652$ samples, based on professional video game reviews obtained via Metacritic and gameplay sections from Wikipedia. By using Metacritic reviews in addition to Wikipedia articles, we benefit from a number of factors. First, the set of aspects used to assess a game is limited and consequently, reviews share redundancy. Second, because they are written by professional journalists, reviews tend to be in-depth and of high-quality. Additionally, when a video game is released, journalists have an incentive to write a complete review and publish it online as soon as possible to draw the attention of potential customers and increase the revenue of their website BIBREF0. Therefore, several reviews for the same product become quickly available and the first version of the corresponding Wikipedia page is usually made available shortly after. Lastly, reviews and Wikipedia pages are available in multiple languages, which opens up the possibility for multilingual multi-document summarization."
],
"extractive_spans": [],
"free_form_answer": "14652",
"highlighted_evidence": [
"In contrast, we propose a novel domain-specific dataset containing $14\\,652$ samples, based on professional video game reviews obtained via Metacritic and gameplay sections from Wikipedia. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We crawl approximately $265\\,000$ professional reviews for around $72\\,000$ games and $26\\,000$ Wikipedia gameplay sections. Since there is no automatic mapping between a game to its Wikipedia page, we design some heuristics. The heuristics are the followings and applied in this order:"
],
"extractive_spans": [
"$265\\,000$ professional reviews for around $72\\,000$ games and $26\\,000$ Wikipedia gameplay sections"
],
"free_form_answer": "",
"highlighted_evidence": [
"We crawl approximately $265\\,000$ professional reviews for around $72\\,000$ games and $26\\,000$ Wikipedia gameplay sections."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What extractive models were trained on this dataset?",
"What abstractive models were trained?",
"Do the reviews focus on a specific video game domain?",
"What is the size of this dataset?"
],
"question_id": [
"7561bd3b8ba7829b3a01ff07f9f3e93a7b8869cc",
"a3ba21341f0cb79d068d24de33b23c36fa646752",
"96295e1fe8713417d2b4632438a95d23831fbbdc",
"5bfbc9ca7fd41be9627f6ef587bb7e21c7983be0"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"dataset",
"dataset",
"dataset",
"dataset"
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Percentiles for different aspects of GameWikiSum. Size is in number of words. ROUGE scores are computed with a summary given its reviews.",
"Table 2: Sizes and unigram recall of single (marked with *) and multi-document summarization datasets. Recall is computed with reference summaries given the input documents.",
"Table 3: Game distribution over platforms with their average and standard deviation number of input documents and ROUGE scores.",
"Table 4: Comparison extractive and abstractive (marked with *) models. Reported scores correspond to ROUGEL F1 score, ROUGE-1 and ROUGE-2 recall respectively."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"4-Table4-1.png"
]
} | [
"What is the size of this dataset?"
] | [
[
"2002.06851-GameWikiSum ::: Heuristic matching-0",
"2002.06851-Introduction-2"
]
] | [
"14652"
] | 287 |
1908.07721 | Fine-tuning BERT for Joint Entity and Relation Extraction in Chinese Medical Text | Entity and relation extraction is a necessary step in structuring medical text. However, the feature extraction ability of the bidirectional long short-term memory network used in existing models does not achieve the best effect. At the same time, language models have achieved excellent results in more and more natural language processing tasks. In this paper, we present a focused attention model for the joint entity and relation extraction task. Our model integrates the well-known BERT language model into joint learning through a dynamic range attention mechanism, thus improving the feature representation ability of the shared parameter layer. Experimental results on coronary angiography texts collected from Shuguang Hospital show that the F1-scores of the named entity recognition and relation classification tasks reach 96.89% and 88.51%, which outperform state-of-the-art methods by 1.65% and 1.22%, respectively. | {
"paragraphs": [
[
"UTF8gkai With the widespread of electronic health records (EHRs) in recent years, a large number of EHRs can be integrated and shared in different medical environments, which further support the clinical decision making and government health policy formulationBIBREF0. However, most of the information in current medical records is stored in natural language texts, which makes data mining algorithms unable to process these data directly. To extract relational entity triples from the text, researchers generally use entity and relation extraction algorithm, and rely on the central word to convert the triples into key-value pairs, which can be processed by conventional data mining algorithms directly. Fig. FIGREF1 shows an example of entity and relation extraction in the text of EHRs. The text contains three relational entity triples, i.e., $<$咳嗽, 程度等级, 反复$>$ ($<$cough, degree, repeated$>$), $<$咳痰, 程度等, 反复$>$ ($<$expectoration, degree, repeated$>$) and $<$发热, 存在情况, 无$>$ ($<$fever, presence, nonexistent$>$). By using the symptom as the central word, these triples can then be converted into three key-value pairs, i.e., $<$咳嗽的程度等级, 反复$>$ ($<$degree of cough, repeated$>$), $<$咳痰的程度等级, 反复$>$ ($<$degree of expectoration, repeated$>$) and $<$发热的存在情况, 无$>$ ($<$presence of fever, nonexistent$>$).",
"UTF8gkai To solve the task of entity and relation extraction, researchers usually follows pipeline processing and split the task into two sub-tasks, namely named entity recognition (NER)BIBREF1 and relation classification (RC)BIBREF2, respectively.",
"However, this pipeline method usually fails to capture joint features between entity and relationship types. For example, for a valid relation “存在情况(presence)” in Fig. FIGREF1, the types of its two relational entities must be “疾病(disease)”, “症状(symptom)” or “存在词(presence word)”. To capture these joint features, a large number of joint learning models have been proposed BIBREF3, BIBREF4, among which bidirectional long short term memory (Bi-LSTM) BIBREF5, BIBREF6 are commonly used as the shared parameter layer. However, compared with the language models that benefit from abundant knowledge from pre-training and strong feature extraction capability, Bi-LSTM model has relatively lower generalization performance.",
"To improve the performance, a simple solution is to incorporate language model into joint learning as a shared parameter layer. However, the existing models only introduce language models into the NER or RC task separately BIBREF7, BIBREF8. Therefore, the joint features between entity and relationship types still can not be captured. Meanwhile, BIBREF9 considered the joint features, but it also uses Bi-LSTM as the shared parameter layer, resulting the same problem as discussed previously.",
"Given the aforementioned challenges and current researches, we propose a focused attention model based on widely known BERT language model BIBREF10 to jointly for NER and RC tasks. Specifically, through the dynamic range attention mechanism, we construct task-specific MASK matrix to control the attention range of the last $K$ layers in BERT language model, leading to the model focusing on the words of the task. This process helps obtain the corresponding task-specific context-dependent representations. In this way, the modified BERT language model can be used as the shared parameter layer in joint learning NER and RC task. We call the modified BERT language model shared task representation encoder (STR-encoder) in the following paper.",
"To sum up, the main contributions of our work are summarized as follows:",
"We propose a focused attention model to jointly learn NER and RC task. The model integrates BERT language model as a shared parameter layer to achieve better generalization performance.",
"In the proposed model, we incorporate a novel structure, called STR-encoder, which changes the attention range of the last $K$ layers in BERT language model to obtain task-specific context-dependent representations. It can make full use of the original structure of BERT to produce the vector of the task, and can directly use the prior knowledge contained in the pre-trained language model.",
"For RC task, we proposed two different MASK matrices to extract the required feature representation of RC task. The performances of these two matrices are analyzed and compared in the experiment.",
"The rest of the paper is organized as follows. We briefly review the related work on NER, RC and joint entity and relation extraction in Section SECREF2. In Section SECREF3, we present the proposed focused attention model. We report the experimental results in Section SECREF4. Section SECREF5 is dedicated to studying several key factors that affect the performance of our model. Finally, conclusion and future work are given in Section SECREF6."
],
[
"Entity and relation extraction is to extract relational entity triplets which are composed of two entities and their relationship. Pipeline and joint learning are two kinds of methods to handle this task. Pipeline methods try to solve it as two subsequent tasks, namely named entity recognition (NER) and relation classification (RC), while joint learning methods attempt to solve the two tasks simultaneously."
],
[
"NER is a primary task in information extraction. In generic domain, we recognize name, location and time from text, while in medical domain, we are interested in disease and symptom. Generally, NER is solved as a sequence tagging task by using BIEOS(Begin, Inside, End, Outside, Single) BIBREF11 tagging strategy. Conventional NER in medical domain can be divided into two categories, i.e., statistical and neural network methods. The former are generally based on conditional random fields (CRF)BIBREF12 and hidden Markov models BIBREF13, BIBREF14, which relies on hand-crafted features and external knowledge resources to improve the accuracy. Neural network methods typically use neural network to calculate the features without tedious feature engineering, e.g., bidirectional long short term memory neural network BIBREF15 and residual dilated convolutional neural network BIBREF16. However, none of the above methods can make use of a large amount of unsupervised corpora, resulting in limited generalization performance."
],
[
"RC is closely related to NER task, which tries to classify the relationship between the entities identified in the text, e.g, “70-80% of the left main coronary artery opening has stenosis\" in the medical text, there is “modifier\" relation between the entity “left main coronary artery\" and the entity “stenosis\". The task is typically formulated into a classification problem that takes a piece of text and two entities in the text as inputs, and the possible relation between the entities as output.",
"The existing methods of RC can be roughly divided into two categories, i.e., traditional methods and neural network approaches. The former are based on feature-basedBIBREF17, BIBREF18, BIBREF19 or kernel-basedBIBREF20 approaches. These models usually spend a lot of time on feature engineering. Neural network methods can extract the relation features without complicated feature engineering. e.g., convolutional neural network BIBREF21, BIBREF22, BIBREF23, recurrent neural network BIBREF24 and long short term memory BIBREF25, BIBREF26. In medical domain, there are recurrent capsule network BIBREF27 and domain invariant convolutional neural network BIBREF28. However, These methods cannot utilize the joint features between entity and relation, resulting in lower generalization performance when compared with joint learning methods."
],
[
"Joint entity and relation extraction tasks solve NER and RC simultaneously. Compared with pipeline methods, joint learning methods are able to capture the joint features between entities and relations BIBREF29.",
"State-of-the-art joint learning methods can be divided into two categories, i.e., joint tagging and parameter sharing methods. Joint tagging transforms NER and RC tasks into sequence tagging tasks through a specially designed tagging scheme, e.g., novel tagging scheme proposed by Zheng et al. BIBREF3. Parameter sharing mechanism shares the feature extraction layer in the models of NER and RC. Compared to joint tagging methods, parameter sharing methods are able to effectively process multi-map problem. The most commonly shared parameter layer in medical domain is the Bi-LSTM network BIBREF9. However, compared with language model, the feature extraction ability of Bi-LSTM is relatively weaker, and the model cannot obtain pre-training knowledge through a large amount of unsupervised corpora, which further reduces the robustness of extracted features."
],
[
"In this section, we introduce classic BERT language model and how to dynamically adjust the range of attention. On this basis, we propose a focused attention model for joint entity and relation extraction."
],
[
"BERT is a language model that utilizes bidirectional attention mechanism and large-scale unsupervised corpora to obtain effective context-sensitive representations of each word in a sentence, e.g. ELMO BIBREF30 and GPT BIBREF31. Since its effective structure and a rich supply of large-scale corporas, BERT has achieved state-of-the-art results on various natural language processing (NLP) tasks, such as question answering and language inference. The basic structure of BERT includes self attention encoder (SA-encoder) and downstream task layer. To handle a variety of downstream tasks, a special classification token called ${[CLS]}$ is added before each input sequence to summarize the overall representation of the sequence. The final hidden state corresponding to the token is the output for classification tasks. Furthermore, SA-encoder includes one embedded layer and $N$ multi-head self-attention layers.",
"The embedding layer is used to obtain the vector representations of all the words in the sequence, and it consists of three components: word embedding (${e}_{word}$), position embedding (${e}_{pos}$), and type embedding (${e}_{type}$). Specifically, word embeddings are obtained through the corresponding embedding matrices. Positional embedding is used to capture the order information of the sequence which is ignored during the self-attention process. Type embedding is used to distinguish two different sequences of the input. Given an input sequence (${S}$), the initial vector representations of all the words in the sequence (${H}_0$) are as follows:",
"where ${LN}$ stands for layer normalization BIBREF32.",
"$N$ multi-head self-attention layers are applied to calculate the context-dependent representations of words (${H}_N$) based on the initial representations (${H}_0$). To solve the problems of gradient vanishing and exploding, ResNet architectureBIBREF33 is applied in the layer. In $N$ multi-head self-attention layers, every layer can produce the output (${H}_{m}$) given the previous output of $(m-1)$-th layer (${H}_{m-1}$):",
"where ${H}_{m}^{\\prime }$ indicates intermediate result in the calculation process of $m$-th layer, ${MHSA}_{h}$ and ${PosFF}$ represent multi-head self-attention and feed-forward that are defined as follows:",
"where $h$ represents the number of self-attention mechanisms in multi-head self-attention layer and ${Att}$ is a single attention mechanism defined as follows:",
"where $Q$, $K$ and $V$ represent “query”, “key” and “value” in the attention calculation process, respectively. Additionally, MASK matrix is used to control the range of attention, which will be analyzed in detail in Section SECREF14.",
"In summary, SA-encoder obtains the corresponding context-dependent representation by inputting the sequence $S$ and the MASK matrix:",
"Finally, the output of SA-encoder is passed to the corresponding downstream task layer to get the final results. In BERT, SA-encoder can connect several downstream task layers. In terms of the content in the paper, the tasks are NER and RC, which will be further detailed in Section SECREF25 and SECREF32."
],
[
"In BERT, MASK matrix is originally used to mask the padding portion of the text. However, we found that by designing a specific MASK matrix, we can directly control the attention range of each word, thus obtaining specific context-sensitive representations. Specially, when calculating the attention (i.e., Equation (DISPLAY_FORM12)), the parameter matrix $MASK\\in {\\lbrace 0,1\\rbrace }^{T\\times T}$, where $T$ is the length of the sequence. If $MASK_{i,j} = 0$, then we have $(MASK_{i,j}-1)\\times \\infty = -\\infty $ and the Equation (DISPLAY_FORM15), which indicates that the $i$-th word ignores the $j$-th word when calculating attention.",
"While $MASK_{i,j} = 1$, we have $(MASK_{i,j}-1)\\times \\infty = 0$ and the Equation (DISPLAY_FORM16), which means the $i$-th word considers the $j$-th word when calculating attention."
],
[
"The architecture of the proposed model is demonstrated in the Fig. FIGREF18. The focused attention model is essentially a joint learning model of NER and RC based on shared parameter approach. It contains layers of shared parameter, NER downstream task and RC downstream task.",
"The shared parameter layer, called shared task representation encoder (STR-encoder), is improved from BERT through dynamic range attention mechanism. It contains an embedded layer and $N$ multi-head self-attention layers which are divided into two blocks. The former $N-K$ layers are only responsible for capturing the context information, and the context-dependent representations of words are expressed as $H_{N-K}$. According to characteristics of NER and RC, the remaining K layers use the $MASK^{task}$ matrix setting by the dynamic range attention mechanism to focus the attention on the words. In this manner, we can obtain task-specific representations $H_N^{task}$ and then pass them to corresponding downstream task layer. In addition, the segmentation point $K$ is a hyperparameter, which is discussed in Section SECREF47.",
"Given a sequence, we add a $[CLS]$ token in front of the sequence as BERT does, and a $[SEP]$ token at the end of the sequence as the end symbol. After the Embedding layer, the initial vector of each word in the sequence $S$ is represented as $H_0$, and is calculated by Equation (DISPLAY_FORM9). Then we input $H_0$ to the former $N-K$ multi-head self-attention layers. In theses layers, attention of a single word is evenly distributed on all the words in the sentence to capture the context information. Given the output (${H}_{m-1}$) from the $(m-1)$-th layer, the output of current layer is calculated as:",
"where $MASK^{all}\\in {\\lbrace 1\\rbrace }^{T\\times T}$ indicates each word calculates attention with all the other words of the sequence.",
"The remaining $K$ layers focus on words of downstream task by task-specific matrix $MASK^{task}$ based on dynamic range attention mechanism. Given the output ($H_{m-1}^{task}$) of previous $(m-1)$-th layer, the model calculate the current output ($H_m^{task}$) as:",
"where $H_{N-K}^{task} =H_{N-K}$ and $task\\in \\lbrace ner,rc\\rbrace $.",
"As for STR-encoder, we only input different $MASK^{task}$ matrices, which calculate various representations of words required by different downstream task ($H_N^{task}$) with the same parameters:",
"This structure has two advantages:",
"It obtains the representation vector of the task through the strong feature extraction ability of BERT. Compared with the complex representation conversion layer, the structure is easier to optimize.",
"It does not significantly adjust the structure of the BERT language model, so the structure can directly use the prior knowledge contained in the parameters of pre-trained language model.",
"Subsequently, we will introduce the construction of $MASK^{task}$ and downstream task layer of NER and RC in blocks."
],
[
"In NER, the model needs to output the corresponding $BIEOS$ tag of each word in the sequence. In order to improve the accuracy, the appropriate attention weight should be learned through parameter optimization rather than limiting the attention range of each word. Therefore, according to the dynamic range attention mechanism, the value of the $MASK^{ner}$ matrix should be set to $MASK_{ner}\\in {\\lbrace 1\\rbrace }^{T\\times T}$, indicating that each word can calculate attention with any other words in the sequence."
],
[
"In NER, the downstream task layer needs to convert the representation vector of each word in the output of STR-encoder into the probability distribution of the corresponding $BIEOS$ tag. Compared with the single-layer neural network, CRF model can capture the link relation between two tags BIBREF34. As a result, we perform CRF layer to get the probability distribution of tags. Specifically, the representation vectors of all the words except $[CLS]$ token in the output of STR-encoder are sent to the CRF layer after self attention layer. Firstly, CRF layer calculates the emission probabilities by linearly transforming these vectors. Afterwards, layer ranks the sequence of tags by means of transition probabilities of the CRF layer. Finally, the probability distribution of tags is obtained by softmax function:",
"$H_N^{ner}$ is the output of STR-encoder when given $MASK^{ner}$, $H_N^{ner}[1:T]$ denotes the representation of all words except $[CLS]$ token. $H_p^{ner}$ is the emission probability matrix of CRF layer, $Score(L|H_p^{ner})$ represents the score of the tag sequence $L$, $A_{L_{t-1},L_t}$ means the probability of the $(t-1)$-th tag transfering to the $t$-th tag, and ${H_p^{ner}}_{t,L_t}$ represents the probability that the $t$-th word is predicted as an $L_t$ tag. $p_{ner}(L|S,MASK^{ner},MASK^{all})$ indicates the probabilities of the tag sequence $L$ when given $S$, $MASK^{ner}$ and $MASK^{all}$, and $J$ is the possible tag sequence.",
"The loss function of NER is shown as Equation (DISPLAY_FORM29), and the training goal is to minimize $L_{ner}$, where $L^{\\prime }$ indicates the real tag sequence."
],
[
"In RC, the relation between two entities are represented by a vector. In order to obtain the vector, we confine the attention range of $[CLS]$ token, which is originally used to summarize the overall representation of the sequence, to two entities. Thus, the vector of $[CLS]$ token can accurately summarize the relation between two entities. Based on the dynamic range attention mechanism, we propose two kinds of $MASK^{rc}$, denoted as Equation (DISPLAY_FORM31) and ().",
"where $P_{CLS}$, $P_{EN1}$ and $P_{EN2}$ represent the positions of $[CLS]$, entity 1 and 2 in sequence S, respectively.",
"The difference between the two matrices is whether the attention range of entity 1 and 2 is confined. In Equation (DISPLAY_FORM31), the attention range of entity 1 and 2 is not confined, which leads to the vector of RC shifting to the context information of entity. Relatively, in Equation (), only $[CLS]$, entity 1 and 2 are able to pay attention to each other, leading the vector of RC shifting to the information of entity itself. Corresponding to the RC task on medical text, the two MASK matrices will be further analyzed in Section SECREF47."
],
[
"For RC, the downstream task layer needs to convert the representation vector of $[CLS]$ token in the output of STR-encoder into the probability distribution of corresponding relation type. In this paper, we use multilayer perceptron (MLP) to carry out this conversion. Specifically, the vector is converted to the probability distribution through two perceptrons with $Tanh$ and $Softmax$ as the activation function, respectively:",
"$H_N^{rc}$ is the output of STR-encoder when given $MASK^{rc}$, $H_N^{rc}[0]$ denotes the representation of $[CLS]$ in the output of STR-encoder, $H_p^{rc}$ is the output of the first perceptron. $p_{rc}(R|S,MASK^{rc},MASK^{all})$ is the output of the second perceptron and represents the probabilities of the relation type $R$ when given the sequence $S$, $MASK^{rc}$ and $MASK^{all}$.",
"The training is to minimize loss function $L_{rc}$, denoted as Equation (DISPLAY_FORM34), where $R^{\\prime }$ indicates the real relation type."
],
[
"Note that, the parameters are shared in the model except the downstream task layers of NER and RC, which enables STR-encoder to learn the joint features of entities and relations. Moreover, compared with the existing parameter sharing model (e.g., Joint-Bi-LSTMBIBREF6), the feature representation ability of STR-encoder is improved by the feature extraction ability of BERT and its knowledge obtained through pre-training."
],
[
"Due to the limitation of deep learning framework, we have to pad sequences to the same length. Therefore, all MASK matrices need to be expanded. The formula for expansion is as follows:",
"where $maxlen$ is the uniform length of the sequence after the padding operation."
],
[
"In this section, we compare the proposed model with NER, RC and joint models. Dataset description and evaluation metrics are first introduced in the following contents, followed by the experimental settings and results."
],
[
"The dataset of entity and relation extraction is collected from coronary arteriography reports in Shanghai Shuguang Hospital. There are five types of entities, i.e., Negation, Body Part, Degree, Quantifier and Location. Five relations are included, i.e., Negative, Modifier, Position, Percentage and No Relation. 85% of “No Relation\" in the dataset are discarded for balance purpose. The statistics of the entities and relations are demonstrated in Table TABREF39 and TABREF40, respectively.",
"In order to ensure the effectiveness of the experiment, we divide the dataset into training, development and test in the ratio of 8:1:1. In the following experiments, we use common performance measures such as Precision, Recall, and F$_1$-score to evaluate NER, RC and joint models."
],
[
"The training of focused attention model proposed in this paper can be divided into two stages. In the first stage, we need to pre-train the shared parameter layer. Due to the high cost of pre-training BERT, we directly adopted parameters pre-trained by Google in Chinese general corpus. In the second stage, we need to fine-tune NER and RC tasks jointly. Parameters of the two downstream task layers are randomly initialized. The parameters are optimized by Adam optimization algorithmBIBREF35 and its learning rate is set to $10^{-5}$ in order to retain the knowledge learned from BERT. Batch size is set to 64 due to graphics memory limitations. The loss function of the model (i.e., $L_{all}$) will be obtained as follows:",
"where $L_{ner}$ is defined in Equation (DISPLAY_FORM29), and $L_{rc}$ is defined in Equation (DISPLAY_FORM34).",
"The two hyperparameters $K$ and $MASK^{rc}$ in the model will be further studied in Section SECREF47. Within a fixed number of epochs, we select the model corresponding to the best relation performance on development dataset."
],
[
"In order to fully verify the performance of focused attention model, we will compare the different methods on the task of NER, RC and joint entity and relation extraction.",
"Based on NER, we experimentally compare our focused attention model with other reference algorithms. These algorithms consist of two NER models in medical domain (i.e., Bi-LSTMBIBREF36 and RDCNNBIBREF16) and one joint model in generic domain (i.e., Joint-Bi-LSTM BIBREF6). In addition, we originally plan to use the joint modelBIBREF9 in the medical domain, but the character-level representations cannot be implemented in Chinese. Therefore, we replace it with a generic domain model BIBREF6 with similar structure. As demonstrated in Table TABREF44, the proposed model achieves the best performance, and its precision, recall and F$_1$-score reach 96.69%, 97.09% and 96.89%, which outperforms the second method by 0.2%, 0.40% and 1.20%, respectively.",
"To further investigate the effectiveness of the proposed model on RC, we use two RC models in medical domain (i.e., RCN BIBREF27 and CNN BIBREF37) and one joint model in generic domain (i.e., Joint-Bi-LSTMBIBREF6) as baseline methods. Since RCN and CNN methods are only applied to RC tasks and cannot extract entities from the text, so we directly use the correct entities in the text to evaluate the RC models. Table TABREF45 illustrate that focused attention model achieves the best performance, and its precision, recall and F$_1$-score reach 96.06%, 96.83% and 96.44%, which beats the second model by 1.57%, 1.59% and 1.58%, respectively.",
"In the task of joint entity and relation extraction, we use Joint-Bi-LSTMBIBREF6 as baseline method. Since both of the models are joint learning, we can use the entities predicted in NER as the input for RC. From Table TABREF46, we can observe that focused attention model achieves the best performance, and its F$_1$-scores reaches 96.89% and 88.51%, which is 1.65% and 1.22% higher than the second method.",
"In conclusion, the experimental results indicate that the feature representation of STR-encoder is indeed stronger than existing common models."
],
[
"In this section, we perform additional experiments to analyze the influence of different settings on segmentation points $K$, different settings on $MASK^{rc}$ and joint learning."
],
[
"In the development dataset, we further study the impacts of different settings on segmentation points $K$ defined in Section SECREF17 and different settings on $MASK^{rc}$ defined in Section SECREF30.",
"As shown in Table TABREF48, when $K=4$ and $MASK^{rc}$ use Equation (), RC reached the best F$_1$-score of 92.18%. When $K=6$ and $MASK^{rc}$ use Equation (DISPLAY_FORM31), NER has the best F$_1$-score of 96.77%. One possible reason is that $MASK^{rc}$ defined in Equation (DISPLAY_FORM31) doesn't confine the attention range of entity 1 and 2, which enables the model to further learn context information in shared parameter layer, leading to a higher F$_1$-score for NER. In contrast, $MASK^{rc}$ defined in Equation () only allows $[CLS]$, entity 1 and 2 to pay attention to each other, which makes the learned features shift to the entities themselves, leading to a higher F$_1$-score of RC.",
"For RC, the F$_1$-score with $K=4$ is the lowest when $MASK^{rc}$ uses Equation (DISPLAY_FORM31), and reaches the highest when $MASK^{rc}$ uses Equation (). One possible reason is that the two hyperparameters are closely related to each other. However, how they interact with each other in focus attention model is still an open question."
],
[
"In order to evaluate the influence of joint learning, we train NER and RC models separately as an ablation experiment. In addition, we use correct entities to evaluate RC, exclude the effect of NER results on the RC results, and independently compare the NRE and RC tasks.",
"As shown in Table TABREF49, compared with training separately, the results are improved by 0.52% score in F$_1$score for NER and 2.37% score in F$_1$score for RC. It shows that joint learning can help to learn the joint features between NER and RC and improves the accuracy of two tasks at the same time. For NER, precision score is improved by 1.55%, but recall score is reduced by 0.55%. One possible reason is that, although the relationship type can guide the model to learn more accurate entity types, it also introduces some uncontrollable noise. In summary, joint learning is an effective method to obtain the best performance."
],
[
"In order to structure medical text, Entity and relation extraction is an indispensable step. In this paper, We propose a focused attention model to jointly learn NER and RC task based on a shared task representation encoder which is transformed from BERT through dynamic range attention mechanism. Compared with existing models, the model can extract the entities and relations from the medical text more accurately. The experimental results on the dataset of coronary angiography texts verify the effectiveness of our model.",
"For future work, the pre-training parameters of BRET used in this paper are pre-trained in the corpus of the generic field so that it cannot fully adapt to the tasks in the medical field. We believe that retrain BRET in the medical field can improve the performance of the model in the specific domain."
],
[
"The authors would like to appreciate the efforts of the editors and valuable comments from the anonymous reviewers. This work is supported by the National Key R&D Program of China for “Precision Medical Research\" under grant 2018YFC0910500."
]
],
"section_name": [
"Introduction",
"Related Work",
"Related Work ::: Named Entity Recognition",
"Related Work ::: Relation Classification",
"Related Work ::: Joint Entity and Relation Extraction",
"Proposed Method",
"Proposed Method ::: BERT Language Model",
"Proposed Method ::: Dynamic Range Attention Mechanism",
"Proposed Method ::: Focused Attention Model",
"Proposed Method ::: Focused Attention Model ::: The Construction of @!START@$MASK^{ner}$@!END@",
"Proposed Method ::: Focused Attention Model ::: The Construction of NER Downstream Task Layer",
"Proposed Method ::: Focused Attention Model ::: The Construction of @!START@$MASK^{rc}$@!END@",
"Proposed Method ::: Focused Attention Model ::: The Construction of RC Downstream Task Layer",
"Proposed Method ::: Joint Learning",
"Proposed Method ::: Additional Instructions for MASK",
"Experimental Studies",
"Experimental Studies ::: Dataset and Evaluation Metrics",
"Experimental Studies ::: Experimental Setup",
"Experimental Studies ::: Experimental Result",
"Experimental Analysis",
"Experimental Analysis ::: Hyperparameter Analysis",
"Experimental Analysis ::: Ablation Analysis",
"Conclusion and Future Work",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"0b60ce01baa68e5b8fc911fb0e5d801bf4ab6295",
"862a258b24c32fb89cd67793dfb2db8a2e4467be"
],
"answer": [
{
"evidence": [
"We propose a focused attention model to jointly learn NER and RC task. The model integrates BERT language model as a shared parameter layer to achieve better generalization performance."
],
"extractive_spans": [],
"free_form_answer": "They train a single model that integrates a BERT language model as a shared parameter layer on NER and RC tasks.",
"highlighted_evidence": [
"We propose a focused attention model to jointly learn NER and RC task. The model integrates BERT language model as a shared parameter layer to achieve better generalization performance."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The architecture of the proposed model is demonstrated in the Fig. FIGREF18. The focused attention model is essentially a joint learning model of NER and RC based on shared parameter approach. It contains layers of shared parameter, NER downstream task and RC downstream task."
],
"extractive_spans": [],
"free_form_answer": "They perform joint learning through shared parameters for NER and RC.",
"highlighted_evidence": [
"The architecture of the proposed model is demonstrated in the Fig. FIGREF18. The focused attention model is essentially a joint learning model of NER and RC based on shared parameter approach. It contains layers of shared parameter, NER downstream task and RC downstream task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"28821d29836d81053ac716deb63820c0fd9a615b",
"6ac0783abdac714bcb4950f7378d0373369f9568"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"5e868688852b0a55785baad6e40cecb85232506e",
"d324ed8303cd6ed903bf4fd1d9fbfb77a41e5761"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: TABLE V COMPARISONS WITH THE DIFFERENT METHODS ON THE TASK OF JOINT ENTITY AND RELATION EXTRACTION"
],
"extractive_spans": [],
"free_form_answer": "Joint Bi-LSTM",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE V COMPARISONS WITH THE DIFFERENT METHODS ON THE TASK OF JOINT ENTITY AND RELATION EXTRACTION"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: TABLE III COMPARISONS WITH THE DIFFERENT METHODS ON THE TASK OF NER",
"FLOAT SELECTED: TABLE IV COMPARISONS WITH THE DIFFERENT METHODS ON THE TASK OF RC"
],
"extractive_spans": [],
"free_form_answer": "RDCNN, Joint-Bi-LSTM",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE III COMPARISONS WITH THE DIFFERENT METHODS ON THE TASK OF NER",
"FLOAT SELECTED: TABLE IV COMPARISONS WITH THE DIFFERENT METHODS ON THE TASK OF RC"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How do they perform the joint training?",
"How many parameters does their model have?",
"What is the previous model that achieved state-of-the-art?"
],
"question_id": [
"0b10cfa61595b21bf3ff13b4df0fe1c17bbbf4e9",
"67104a5111bf8ea626532581f20b33b851b5abc1",
"1d40d177c5e410cef1142ec9a5fab9204db22ae1"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"BERT",
"BERT",
"BERT"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1. An illustrative example of entity and relation extraction in the text of EHRs.",
"Fig. 2. The architecture of our proposed model.",
"TABLE I STATISTICS OF DIFFERENT TYPES OF ENTITIES",
"TABLE II STATISTICS OF DIFFERENT TYPES OF RELATION",
"TABLE III COMPARISONS WITH THE DIFFERENT METHODS ON THE TASK OF NER",
"TABLE IV COMPARISONS WITH THE DIFFERENT METHODS ON THE TASK OF RC",
"TABLE V COMPARISONS WITH THE DIFFERENT METHODS ON THE TASK OF JOINT ENTITY AND RELATION EXTRACTION",
"TABLE VI COMPARISONS WITH THE DIFFERENT HYPERPARAMETERS ON THE TASK OF JOINT ENTITY AND RELATION EXTRACTION",
"TABLE VII COMPARISONS WITH TRAINING NER AND RC TASKS SEPARATELY"
],
"file": [
"1-Figure1-1.png",
"4-Figure2-1.png",
"5-TableI-1.png",
"6-TableII-1.png",
"6-TableIII-1.png",
"6-TableIV-1.png",
"7-TableV-1.png",
"7-TableVI-1.png",
"7-TableVII-1.png"
]
} | [
"How do they perform the joint training?",
"What is the previous model that achieved state-of-the-art?"
] | [
[
"1908.07721-Proposed Method ::: Focused Attention Model-0",
"1908.07721-Introduction-6"
],
[
"1908.07721-6-TableIV-1.png",
"1908.07721-7-TableV-1.png",
"1908.07721-6-TableIII-1.png"
]
] | [
"They perform joint learning through shared parameters for NER and RC.",
"RDCNN, Joint-Bi-LSTM"
] | 290 |
1908.06024 | Tackling Online Abuse: A Survey of Automated Abuse Detection Methods | Abuse on the Internet represents an important societal problem of our time. Millions of Internet users face harassment, racism, personal attacks, and other types of abuse on online platforms. The psychological effects of such abuse on individuals can be profound and lasting. Consequently, over the past few years, there has been a substantial research effort towards automated abuse detection in the field of natural language processing (NLP). In this paper, we present a comprehensive survey of the methods that have been proposed to date, thus providing a platform for further development of this area. We describe the existing datasets and review the computational approaches to abuse detection, analyzing their strengths and limitations. We discuss the main trends that emerge, highlight the challenges that remain, outline possible solutions, and propose guidelines for ethics and explainability | {
"paragraphs": [
[
"With the advent of social media, anti-social and abusive behavior has become a prominent occurrence online. Undesirable psychological effects of abuse on individuals make it an important societal problem of our time. Munro munro2011 studied the ill-effects of online abuse on children, concluding that children may develop depression, anxiety, and other mental health problems as a result of their encounters online. Pew Research Center, in its latest report on online harassment BIBREF0 , revealed that INLINEFORM0 of adults in the United States have experienced abusive behavior online, of which INLINEFORM1 have faced severe forms of harassment, e.g., that of sexual nature. The report goes on to say that harassment need not be experienced first-hand to have an impact: INLINEFORM2 of American Internet users admitted that they stopped using an online service after witnessing abusive and unruly behavior of their fellow users. These statistics stress the need for automated abuse detection and moderation systems. Therefore, in the recent years, a new research effort on abuse detection has sprung up in the field of NLP.",
"That said, the notion of abuse has proven elusive and difficult to formalize. Different norms across (online) communities can affect what is considered abusive BIBREF1 . In the context of natural language, abuse is a term that encompasses many different types of fine-grained negative expressions. For example, Nobata et al. nobata use it to collectively refer to hate speech, derogatory language and profanity, while Mishra et al. mishra use it to discuss racism and sexism. The definitions for different types of abuse tend to be overlapping and ambiguous. However, regardless of the specific type, we define abuse as any expression that is meant to denigrate or offend a particular person or group. Taking a course-grained view, Waseem et al. W17-3012 classify abuse into broad categories based on explicitness and directness. Explicit abuse comes in the form of expletives, derogatory words or threats, while implicit abuse has a more subtle appearance characterized by the presence of ambiguous terms and figures of speech such as metaphor or sarcasm. Directed abuse targets a particular individual as opposed to generalized abuse, which is aimed at a larger group such as a particular gender or ethnicity. This categorization exposes some of the intricacies that lie within the task of automated abuse detection. While directed and explicit abuse is relatively straightforward to detect for humans and machines alike, the same is not true for implicit or generalized abuse. This is illustrated in the works of Dadvar et al. davdar and Waseem and Hovy waseemhovy: Dadvar et al. observed an inter-annotator agreement of INLINEFORM0 on their cyber-bullying dataset. Cyber-bullying is a classic example of directed and explicit abuse since there is typically a single target who is harassed with personal attacks. On the other hand, Waseem and Hovy noted that INLINEFORM1 of all the disagreements in annotation of their dataset occurred on the sexism class. Sexism is typically both generalized and implicit.",
"In this paper, we survey the methods that have been developed for automated detection of online abuse, analyzing their strengths and weaknesses. We first describe the datasets that exist for abuse. Then we review the various detection methods that have been investigated by the NLP community. Finally, we conclude with the main trends that emerge, highlight the challenges that remain, outline possible solutions, and propose guidelines for ethics and explainability. To the best of our knowledge, this is the first comprehensive survey in this area. We differ from previous surveys BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 in the following respects: 1) we discuss the categorizations of abuse based on coarse-grained vs. fine-grained taxonomies; 2) we present a detailed overview of datasets annotated for abuse; 3) we provide an extensive review of the existing abuse detection methods, including ones based on neural networks (omitted by previous surveys); 4) we discuss the key outstanding challenges in this area; and 5) we cover aspects of ethics and explainability."
],
[
"Supervised learning approaches to abuse detection require annotated datasets for training and evaluation purposes. To date, several datasets manually annotated for abuse have been made available by researchers. These datasets differ in two respects:",
"",
"In what follows, we review several commonly-used datasets manually annotated for abuse.",
"",
"Dataset descriptions. The earliest dataset published in this domain was compiled by Spertus smokey. It consisted of INLINEFORM0 private messages written in English from the web-masters of controversial web resources such as NewtWatch. These messages were marked as flame (containing insults or abuse; INLINEFORM1 ), maybe flame ( INLINEFORM2 ), or okay ( INLINEFORM3 ). We refer to this dataset as data-smokey. Yin et al. Yin09detectionof constructed three English datasets and annotated them for harassment, which they defined as “systematic efforts by a user to belittle the contributions of other users\". The samples were taken from three social media platforms: Kongregate ( INLINEFORM4 posts; INLINEFORM5 harassment), Slashdot ( INLINEFORM6 posts; INLINEFORM7 harassment), and MySpace ( INLINEFORM8 posts; INLINEFORM9 harassment). We refer to the three datasets as data-harass. Several datasets have been compiled using samples taken from portals of Yahoo!, specifically the News and Finance portals. Djuric et al. djuric created a dataset of INLINEFORM10 user comments in English from the Yahoo! Finance website that were editorially labeled as either hate speech ( INLINEFORM11 ) or clean (data-yahoo-fin-dj). Nobata et al. nobata produced four more datasets with comments from Yahoo! News and Yahoo! Finance, each labeled abusive or clean: 1) data-yahoo-fin-a: INLINEFORM12 comments, 7.0% abusive; 2) data-yahoo-news-a: INLINEFORM13 comments, 16.4% abusive; 3) data-yahoo-fin-b: INLINEFORM14 comments, 3.4% abusive; and 4) data-yahoo-news-b: INLINEFORM15 comments, 9.7% abusive.",
"Several groups have investigated abusive language in Twitter. Waseem and Hovy waseemhovy created a corpus of INLINEFORM0 tweets, each annotated as one of racism ( INLINEFORM1 ), sexism, ( INLINEFORM2 ) or neither (data-twitter-wh). We note that although certain tweets in the dataset lack surface-level abusive traits (e.g., @Mich_McConnell Just “her body” right?), they have nevertheless been marked as racist or sexist as the annotators took the wider discourse into account; however, such discourse information or annotation is not preserved in the dataset. Inter-annotator agreement was reported at INLINEFORM3 , with a further insight that INLINEFORM4 of all the disagreements occurred on the sexism class alone. Waseem waseem later released a dataset of INLINEFORM5 tweets annotated as racism ( INLINEFORM6 ), sexism ( INLINEFORM7 ), both ( INLINEFORM8 ), or neither (data-twitter-w). data-twitter-w and data-twitter-wh have INLINEFORM9 tweets in common. It should, however, be noted that the inter-annotator agreement between the two datasets is low (mean pairwise INLINEFORM10 ) BIBREF6 .",
"Davidson et al. davidson created a dataset of approximately INLINEFORM0 tweets, manually annotated as one of racist ( INLINEFORM1 ), offensive but not racist ( INLINEFORM2 ), or clean ( INLINEFORM3 ). We note, however, that their data sampling procedure relied on the presence of certain abusive words and, as a result, the distribution of classes does not follow a real-life distribution. Recently, Founta et al. founta crowd-sourced a dataset (data-twitter-f) of INLINEFORM4 tweets, of which INLINEFORM5 were annotated as normal, INLINEFORM6 as spam, INLINEFORM7 as hateful and INLINEFORM8 as abusive. The OffensEval 2019 shared task used a recently released dataset of INLINEFORM9 tweets BIBREF7 , each hierarchically labeled as: offensive ( INLINEFORM10 ) or not, whether the offence is targeted ( INLINEFORM11 ) or not, and whether it targets an individual ( INLINEFORM12 ), a group ( INLINEFORM13 ) or otherwise ( INLINEFORM14 ).",
"Wulczyn et al. wulczyn annotated English Talk page comments from a dump of the full history of Wikipedia and released three datasets: one focusing on personal attacks ( INLINEFORM0 comments; INLINEFORM1 abusive), one on aggression ( INLINEFORM2 comments), and one on toxicity ( INLINEFORM3 comments; INLINEFORM4 abusive) (data-wiki-att, data-wiki-agg, and data-wiki-tox respectively). data-wiki-agg contains the exact same comments as data-wiki-att but annotated for aggression – the two datasets show a high correlation in the nature of abuse (Pearson's INLINEFORM5 ). Gao and Huang gao2017detecting released a dataset of INLINEFORM6 Fox News user comments (data-fox-news) annotated as hateful ( INLINEFORM7 ) or non-hateful. The dataset preserves context information for each comment, including user's screen-name, all comments in the same thread, and the news article for which the comment is written.",
"Some researchers investigated abuse in languages other than English. Van Hee et al. vanhee gathered INLINEFORM0 Dutch posts from ask.fm to form a dataset on cyber-bullying (data-bully; INLINEFORM1 cyber-bullying cases). Pavlopoulos et al. pavlopoulos-emnlp released a dataset of ca. INLINEFORM2 comments in Greek provided by the news portal Gazzetta (data-gazzetta). The comments were marked as accept or reject, and are divided into 6 splits with similar distributions (the training split is the largest one: INLINEFORM3 accepted and INLINEFORM4 rejected comments). As part of the GermEval shared task on identification of offensive language in German tweets BIBREF8 , a dataset of INLINEFORM5 tweets was released, of which INLINEFORM6 were labeled as abuse, INLINEFORM7 as insult, INLINEFORM8 as profanity, and INLINEFORM9 as other. Around the same time, INLINEFORM10 Facebook posts and comments, each in Hindi (in both Roman and Devanagari script) and English, were released (data-facebook) as part of the COLING 2018 shared task on aggression identification BIBREF9 . INLINEFORM11 of the comments were covertly aggressive, INLINEFORM12 overtly aggressive and INLINEFORM13 non-aggressive. We note, however, that some issues were raised by the participants regarding the quality of the annotations. The HatEval 2019 shared task (forthcoming) focuses on detecting hate speech against immigrants and women using a dataset of INLINEFORM14 tweets in Spanish and INLINEFORM15 in English annotated hierarchically as hateful or not; and, in turn, as aggressive or not, and whether the target is an individual or a group.",
"",
"Remarks. In their study, Ross et al. ross stressed the difficulty in reliably annotating abuse, which stems from multiple factors, such as the lack of “standard” definitions for the myriad types of abuse, differences in annotators' cultural background and experiences, and ambiguity in the annotation guidelines. That said, Waseem et al. W17-3012 and Nobata et al. nobata observed that annotators with prior expertise provide good-quality annotations with high levels of agreement. We note that most datasets contain discrete labels only; abuse detection systems trained on them would be deprived of the notion of severity, which is vital in real-world settings. Also, most datasets cover few types of abuse only. Salminen et al. salminen2018anatomy suggest fine-grained annotation schemes for deeper understanding of abuse; they propose 29 categories that include both types of abuse and their targets (e.g., humiliation, religion)."
],
[
"In this section, we describe abuse detection methods that rely on hand-crafted rules and manual feature engineering. The first documented abuse detection method was designed by Spertus smokey who used a heuristic rule-based approach to produce feature vectors for the messages in the data-smokey dataset, followed by a decision tree generator to train a classification model. The model achieved a recall of INLINEFORM0 on the flame messages, and INLINEFORM1 on the non-flame ones in the test set. Spertus noted some limitations of adopting a heuristic rule-based approach, e.g., the inability to deal with sarcasm, and vulnerability to errors in spelling, punctuation and grammar. Yin et al. Yin09detectionof developed a method for detecting online harassment. Working with the three data-harass datasets, they extracted local features (tf–idf weights of words), sentiment-based features (tf–idf weights of foul words and pronouns) and contextual features (e.g., similarity of a post to its neighboring posts) to train a linear support vector machine (svm) classifier. The authors concluded that important contextual indicators (such as harassment posts generally being off-topic) cannot be captured by local features alone. Their approach achieved INLINEFORM2 F INLINEFORM3 on the MySpace dataset, INLINEFORM4 F INLINEFORM5 on the Slashdot dataset, and INLINEFORM6 F INLINEFORM7 on the Kongregate dataset.",
"Razavi et al. razavi were the first to adopt lexicon-based abuse detection. They constructed an insulting and abusing language dictionary of words and phrases, where each entry had an associated weight indicating its abusive impact. They utilized semantic rules and features derived from the lexicon to build a three-level Naive Bayes classification system and apply it to a dataset of INLINEFORM0 messages ( INLINEFORM1 flame and the rest okay) extracted from the Usenet newsgroup and the Natural Semantic Module company's employee conversation thread ( INLINEFORM2 accuracy). Njagi et al. gitari also employed such a lexicon-based approach and, more recently, Wiegand et al. wiegand proposed an automated framework for generating such lexicons. While methods based on lexicons performed well on explicit abuse, the researchers noted their limitations on implicit abuse.",
"Bag-of-words (bow) features have been integral to several works on abuse detection. Sood et al. sood2012 showed that an svm trained on word bi-gram features outperformed a word-list baseline utilizing a Levenshtein distance-based heuristic for detecting profanity. Their best classifier (combination of SVMs and word-lists) yielded an F INLINEFORM0 of INLINEFORM1 . Warner and Hirschberg warner employed a template-based strategy alongside Brown clustering to extract surface-level bow features from a dataset of paragraphs annotated for antisemitism, and achieved an F INLINEFORM2 of INLINEFORM3 using svms. Their approach is unique in that they framed the task as a word-sense disambiguation problem, i.e., whether a term carried an anti-semitic sense or not. Other examples of bow-based methods are those of Dinakar et al. dinakar2011modeling, Burnap and Williams burnap and Van Hee et al. vanhee who use word n-grams in conjunction with other features, such as typed-dependency relations or scores based on sentiment lexicons, to train svms ( INLINEFORM4 F INLINEFORM5 on the data-bully dataset). Recenlty, Salminen et al. salminen2018anatomy showed that a linear SVM using tf–idf weighted n-grams achieves the best performance (average F INLINEFORM6 of INLINEFORM7 ) on classification of hateful comments (from a YouTube channel and Facebook page of an online news organization) as one of 29 different hate categories (e.g., accusation, promoting violence, humiliation, etc.).",
"Several researchers have directly incorporated features and identity traits of users in order to model the likeliness of abusive behavior from users with certain traits, a process known as user profiling. Dadvar et al. davdar included the age of users alongside other traditional lexicon-based features to detect cyber-bullying, while Galán-García et al. galan2016supervised utilized the time of publication, geo-position and language in the profile of Twitter users. Waseem and Hovy waseemhovy exploited gender of Twitter users alongside character n-gram counts to improve detection of sexism and racism in tweets from data-twitter-wh (F INLINEFORM0 increased from INLINEFORM1 to INLINEFORM2 ). Using the same setup, Unsvåg and Gambäck unsvaag2018effects showed that the inclusion of social network-based (i.e., number of followers and friends) and activity-based (i.e., number of status updates and favorites) information of users alongside their gender further enhances performance ( INLINEFORM3 gain in F INLINEFORM4 )."
],
[
"In this section, we review the approaches to abuse detection that utilize or rely solely on neural networks. We also include methods that use embeddings generated from a neural architecture within an otherwise non-neural framework.",
"",
"Distributed representations. Djuric et al. djuric were the first to adopt a neural approach to abuse detection. They utilized paragraph2vec BIBREF10 to obtain low-dimensional representations for comments in data-yahoo-fin-dj, and train a logistic regression (lr) classifier. Their model outperformed other classifiers trained on bow-based representations (auc INLINEFORM0 vs. INLINEFORM1 ). In their analysis, the authors noted that words and phrases in hate speech tend to be obfuscated, leading to high dimensionality and large sparsity of bow representations; classifiers trained on such representations often over-fit in training.",
"Building on the work of Djuric et al., Nobata et al. nobata evaluated the performance of a large range of features on the Yahoo! datasets (data-yahoo-*) using a regression model: (1) word and character n-grams; (2) linguistic features, e.g., number of polite/hate words and punctuation count; (3) syntactic features, e.g., parent and grandparent of node in a dependency tree; (4) distributional-semantic features, e.g., paragraph2vec comment representations. Although the best results were achieved with all features combined (F INLINEFORM0 INLINEFORM1 on data-yahoo-fin-a, INLINEFORM2 on data-yahoo-news-a), character n-grams on their own contributed significantly more than other features due to their robustness to noise (i.e., obfuscations, misspellings, unseen words). Experimenting with the data-yahoo-fin-dj dataset, Mehdad and Tetreault mehdad investigated whether character-level features are more indicative of abuse than word-level ones. Their results demonstrated the superiority of character-level features, showing that svm classifiers trained on Bayesian log-ratio vectors of average counts of character n-grams outperform the more intricate approach of Nobata et al. nobata in terms of AUC ( INLINEFORM3 vs. INLINEFORM4 ) as well as other rnn-based character and word-level models.",
"Samghabadi et al. W17-3010 utilized a similar set of features as Nobata et al. and augmented it with hand-engineered ones such as polarity scores derived from SentiWordNet, measures based on the LIWC program, and features based on emoticons. They then applied their method to three different datasets: data-wiki-att, a Kaggle dataset annotated for insult, and a dataset of questions and answers (each labeled as invective or neutral) that they created by crawling ask.fm. Distributional-semantic features combined with the aforementioned features constituted an effective feature space for the task ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 F INLINEFORM3 on data-wiki-att, Kaggle, ask.fm respectively). In line with the findings of Nobata et al. and Mehdad and Tetreault, character n-grams performed well on these datasets too.",
"",
"Deep learning in abuse detection. With the advent of deep learning, many researchers have explored its efficacy in abuse detection. Badjatiya et al. badjatiya evaluated several neural architectures on the data-twitter-wh dataset. Their best setup involved a two-step approach wherein they use a word-level long-short term memory (lstm) model, to tune glove or randomly-initialized word embeddings, and then train a gradient-boosted decision tree (gbdt) classifier on the average of the tuned embeddings in each tweet. They achieved the best results using randomly-initialized embeddings (weighted F INLINEFORM0 of INLINEFORM1 ). However, working with a similar setup, Mishra et al. mishra recently reported that glove initialization provided superior performance; a mismatch is attributed to the fact that Badjatiya et al. tuned the embeddings on the entire dataset (including the test set), hence allowing for the randomly-initialized ones to overfit.",
"Park and Fung parkfung utilized character and word-level cnns to classify comments in the dataset that they formed by combining data-twitter-w and data-twitter-wh. Their experiments demonstrated that combining the two levels of granularity using two input channels achieves the best results, outperforming a character n-gram lr baseline (weighted F INLINEFORM0 from INLINEFORM1 to INLINEFORM2 ). Several other works have also demonstrated the efficacy of cnns in detecting abusive social media posts BIBREF11 . Some researchers BIBREF12 , BIBREF13 have shown that sequentially combining cnns with gated recurrent unit (gru) rnns can enhance performance by taking advantage of properties of both architectures (e.g., 1-2% increase in F INLINEFORM3 compared to only using cnns).",
"Pavlopoulos et al. pavlopoulos,pavlopoulos-emnlp applied deep learning to the data-wiki-att, data-wiki-tox, and data-gazzetta datasets. Their most effective setups were: (1) a word-level gru followed by an lr layer; (2) setup 1 extended with an attention mechanism on words. Both setups outperformed a simple word-list baseline and the character n-gram lr classifier (detox) of Wulczyn et al. wulczyn. Setup 1 achieved the best performance on data-wiki-att and data-wiki-tox (auc INLINEFORM0 and INLINEFORM1 respectively), while setup 2 performed the best on data-gazzetta (auc INLINEFORM2 ). The attention mechanism was additionally able to highlight abusive words and phrases within the comments, exhibiting a high level of agreement with annotators on the task. Lee et al. W18-5113 worked with a subset of the data-twitter-f dataset and showed that a word-level bi-gru along with latent topic clustering (whereby topic information is extracted from the hidden states of the gru BIBREF14 ) yielded the best weighted F INLINEFORM3 ( INLINEFORM4 ).",
"The GermEval shared task on identification of offensive language in German tweets BIBREF8 saw submission of both deep learning and feature engineering approaches. The winning system BIBREF15 (macro F INLINEFORM0 of INLINEFORM1 ) employed multiple character and token n-gram classifiers, as well as distributional semantic features obtained by averaging word embeddings. The second best approach BIBREF16 (macro F INLINEFORM2 INLINEFORM3 ), on the other hand, employed an ensemble of cnns, the outputs of which were fed to a meta classifier for final prediction. Most of the remaining submissions BIBREF17 , BIBREF18 used deep learning with cnns and rnns alongside techniques such as transfer learning (e.g., via machine translation or joint representation learning for words across languages) from abuse-annotated datasets in other languages (mainly English). Wiegand et al. wiegand2018overview noted that simple deep learning approaches themselves were quite effective, and the addition of other techniques did not necessarily provide substantial improvements.",
"Kumar et al. kumar2018benchmarking noted similar trends in the shared task on aggression identification on data-facebook. The top approach on the task's English dataset BIBREF19 comprised rnns and cnns along with transfer learning via machine translation (macro F INLINEFORM0 of INLINEFORM1 ). The top approach for Hindi BIBREF20 utilized lexical features based on word and character n-grams (F INLINEFORM2 62.92%).",
"Recently, Aken et al. van2018challenges performed a systematic comparison of neural and non-neural approaches to toxic comment classification, finding that ensembles of the two were most effective.",
"",
"User profiling with neural networks. More recently, researchers have employed neural networks to extract features for users instead of manually leveraging ones like gender, location, etc. as discussed before. Working with the data-gazzetta dataset, Pavlopoulos et al. W17-4209 incorporated user embeddings into Pavlopoulos' setup 1 pavlopoulos,pavlopoulos-emnlp described above. They divided all the users whose comments are included in data-gazzetta into 4 types based on proportion of abusive comments (e.g., red users if INLINEFORM0 comments and INLINEFORM1 abusive comments), yellow (users with INLINEFORM2 comments and INLINEFORM3 abusive comments), green (users with INLINEFORM4 comments and INLINEFORM5 abusive comments), and unknown (users with INLINEFORM6 comments). They then assigned unique randomly-initialized embeddings to users and added them as additional input to the lr layer, alongside representations of comments obtained from the gru, increasing auc from INLINEFORM7 to INLINEFORM8 . Qian et al. N18-2019 used lstms for modeling inter and intra-user relationships on data-twitter-wh, with sexist and racist tweets combined into one category. The authors applied a bi-lstm to users' recent tweets in order to generate intra-user representations that capture their historic behavior. To improve robustness against noise present in tweets, they also used locality sensitive hashing to form sets semantically similar to user tweets. They then trained a policy network to select tweets from such sets that a bi-lstm could use to generate inter-user representations. When these inter and intra-user representations were utilized alongside representations of tweets from an lstm baseline, performance increased significantly (from INLINEFORM9 to INLINEFORM10 F INLINEFORM11 ).",
"Mishra et al. mishra constructed a community graph of all users whose tweets are included in the data-twitter-wh dataset. Nodes in the graph were users while edges the follower-following relationship between them on Twitter. They then applied node2vec BIBREF21 to this graph to generate user embeddings. Inclusion of these embeddings into character n-gram based baselines yielded state of the art results on data-twitter-wh (F INLINEFORM0 increased from INLINEFORM1 and INLINEFORM2 to INLINEFORM3 and INLINEFORM4 on the racism and sexism classes respectively). The gains were attributed to the fact that user embeddings captured not only information about online communities, but also some elements of the wider conversation amongst connected users in the graph. Ribeiro et al. ribeiro and Mishra et al. mishragcn applied graph neural networks BIBREF22 , BIBREF23 to social graphs in order to generate user embeddings (i.e., profiles) that capture not only their surrounding community but also their linguistic behavior."
],
[
"",
"Current trends. English has been the dominant language so far in terms of focus, followed by German, Hindi and Dutch. However, recent efforts have focused on compilation of datasets in other languages such as Slovene and Croatian BIBREF24 , Chinese BIBREF25 , Arabic BIBREF26 , and even some unconventional ones such as Hinglish BIBREF27 . Most of the research to date has been on racism, sexism, personal attacks, toxicity, and harassment. Other types of abuse such as obscenity, threats, insults, and grooming remain relatively unexplored. That said, we note that the majority of methods investigated to date and described herein are (in principle) applicable to a range of abuse types.",
"While the recent state of the art approaches rely on word-level cnns and rnns, they remain vulnerable to obfuscation of words BIBREF28 . Character n-gram, on the other hand, remain one of the most effective features for addressing obfuscation due to their robustness to spelling variations. Many researchers to date have exclusively relied on text based features for abuse detection. But recent works have shown that personal and community-based profiling features of users significantly enhance the state of the art.",
"",
"Ethical challenges. Whilst the research community has started incorporating features from user profiling, there has not yet been a discussion of ethical guidelines for doing so. To encourage such a discussion, we lay out four ethical considerations in the design of such approaches. First, the profiling approach should not compromise the privacy of the user. So a researcher might ask themselves such questions as: is the profiling based on identity traits of users (e.g., gender, race etc.) or solely on their online behavior? And is an appropriate generalization from (identifiable) user traits to population-level behavioural trends performed? Second, one needs to reflect on the possible bias in the training procedure: is it likely to induce a bias against users with certain traits? Third, the visibility aspect needs to be accounted for: is the profiling visible to the users, i.e., can users directly or indirectly observe how they (or others) have been profiled? And finally, one needs to carefully consider the purpose of such profiling: is it intended to take actions against users, or is it more benign (e.g. to better understand the content produced by them and make task-specific generalizations)? While we do not intend to provide answers to these questions within this survey, we hope that the above considerations can help to start a debate on these important issues.",
"",
"Labeling abuse. Labeling experiences as abusive provides powerful validation for victims of abuse and enables observers to grasp the scope of the problem. It also creates new descriptive norms (suggesting what types of behavior constitute abuse) and exposes existing norms and expectations around appropriate behavior. On the other hand, automated systems can invalidate abusive experiences, particularly for victims whose experiences do not lie within the realm of `typical' experiences BIBREF29 . This points to a critical issue: automated systems embody the morals and values of their creators and annotators BIBREF30 , BIBREF29 . It is therefore imperative that we design systems that overcome such issues. For e.g., some recent works have investigated ways to mitigate gender bias in models BIBREF31 , BIBREF32 .",
"",
"Abuse over time and across domains. New abusive words and phrases continue to enter the language BIBREF33 . This suggests that abuse is a constantly changing phenomenon. Working with the data-yahoo-*-b datasets, Nobata et al. nobata found that a classifier trained on more recent data outperforms one trained on older data. They noted that a prominent factor in this is the continuous evolution of the Internet jargon. We would like to add that, given the situational and topical nature of abuse BIBREF1 , contextual features learned by detection methods may become irrelevant over time.",
"A similar trend also holds for abuse detection across domains. Wiegand et al. wiegand showed that the performance of state of the art classifiers BIBREF34 , BIBREF35 decreases substantially when tested on data drawn from domains different to those in the training set. Wiegand et al. attributed the trend to lack of domain-specific learning. Chandrasekharan et al. chandrasekharan2017bag propose an approach that utilizes similarity scores between posts to improve in-domain performance based on out-of-domain data. Possible solutions for improving cross-domain abuse detection can be found in the literature of (adversarial) multi-task learning and domain adaptation BIBREF36 , BIBREF37 , BIBREF38 , and also in works such as that of Sharifirad et al. jafarpour2018boosting who utilize knowledge graphs to augment the training of a sexist tweet classifier. Recently, Waseem et al. waseem2018bridging and Karan and Šnajder karan2018cross exploited multi-task learning frameworks to train models that are robust across data from different distributions and data annotated under different guidelines.",
"",
"Modeling wider conversation. Abuse is inherently contextual; it can only be interpreted as part of a wider conversation between users on the Internet. This means that individual comments can be difficult to classify without modeling their respective contexts. However, the vast majority of existing approaches have focused on modeling the lexical, semantic and syntactic properties of comments in isolation from other comments. Mishra et al. mishra have pointed out that some tweets in data-twitter-wh do not contain sufficient lexical or semantic information to detect abuse even in principle, e.g., @user: Logic in the world of Islam http://t.co/xxxxxxx, and techniques for modeling discourse and elements of pragmatics are needed. To address this issue, Gao and Huang gao2017detecting, working with data-fox-news, incorporate features from two sources of context: the title of the news article for which the comment was posted, and the screen name of the user who posted it. Yet this is only a first step towards modeling the wider context in abuse detection; more sophisticated techniques are needed to capture the history of the conversation and the behavior of the users as it develops over time. NLP techniques for modeling discourse and dialogue can be a good starting point in this line of research. However, since posts on social media often includes data of multiple modalities (e.g., a combination of images and text), abuse detection systems would also need to incorporate a multi-modal component.",
"",
"Figurative language. Figurative devices such as metaphor and sarcasm are common in natural language. They tend to be used to express emotions and sentiments that go beyond the literal meaning of words and phrases BIBREF39 . Nobata et al. nobata (among others, e.g., Aken et al. van2018challenges) noted that sarcastic comments are hard for abuse detection methods to deal with since surface features are not sufficient; typically the knowledge of the context or background of the user is also required. Mishra mishrathesis found that metaphors are more frequent in abusive samples as opposed to non-abusive ones. However, to fully understand the impact of figurative devices on abuse detection, datasets with more pronounced presence of these are required.",
"",
"Explainable abuse detection. Explainability has become an important aspect within NLP, and within AI generally. Yet there has been no discussion of this issue in the context of abuse detection systems. We hereby propose three properties that an explainable abuse detection system should aim to exhibit. First, it needs to establish intent of abuse (or the lack of it) and provide evidence for it, hence convincingly segregating abuse from other phenomena such as sarcasm and humour. Second, it needs to capture abusive language, i.e., highlight instances of abuse if present, be they explicit (i.e., use of expletives) or implicit (e.g., dehumanizing comparisons). Third, it needs to identify the target(s) of abuse (or the absence thereof), be it an individual or a group. These properties align well with the categorizations of abuse we discussed in the introduction. They also aptly motivate the advances needed in the field: (1) developments in areas such as sarcasm detection and user profiling for precise segregation of abusive intent from humor, satire, etc.; (2) better identification of implicit abuse, which requires improvements in modeling of figurative language; (3) effective detection of generalized abuse and inference of target(s), which require advances in areas such as domain adaptation and conversation modeling."
],
[
"Online abuse stands as a significant challenge before society. Its nature and characteristics constantly evolve, making it a complex phenomenon to study and model. Automated abuse detection methods have seen a lot of development in recent years: from simple rule-based methods aimed at identifying directed, explicit abuse to sophisticated methods that can capture rich semantic information and even aspects of user behavior. By comprehensively reviewing the investigated methods to date, our survey aims to provide a platform for future research, facilitating progress in this important area. While we see an array of challenges that lie ahead, e.g., modeling extra-propositional aspects of language, user behavior and wider conversation, we believe that recent progress in the areas of semantics, dialogue modeling and social media analysis put the research community in a strong position to address them. Summaries of public datasets In table TABREF4 , we summarize the datasets described in this paper that are publicly available and provide links to them. A discussion of metrics The performance results we have reported highlight that, throughout work on abuse detection, different researchers have utilized different evaluation metrics for their experiments – from area under the receiver operating characteristic curve (auroc) BIBREF79 , BIBREF48 to micro and macro F INLINEFORM0 BIBREF28 – regardless of the properties of their datasets. This makes the presented techniques more difficult to compare. In addition, as abuse is a relatively infrequent phenomenon, the datasets are typically skewed towards non-abusive samples BIBREF6 . Metrics such as auroc may, therefore, be unsuitable since they may mask poor performance on the abusive samples as a side-effect of the large number of non-abusive samples BIBREF52 . Macro-averaged precision, recall, and F INLINEFORM1 , as well as precision, recall, and F INLINEFORM2 on specifically the abusive classes, may provide a more informative evaluation strategy; the primary advantage being that macro-averaged metrics provide a sense of effectiveness on the minority classes BIBREF73 . Additionally, area under the precision-recall curve (auprc) might be a better alternative to auroc in imbalanced scenarios BIBREF46 . "
]
],
"section_name": [
"Introduction",
"Annotated datasets",
"Feature engineering based approaches",
"Neural network based approaches",
"Discussion",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"0b8d65b6f3620450d23af975c606291cbf460fc4",
"7f3a71d15630e54953c8fbf719d80b6e72878727"
],
"answer": [
{
"evidence": [
"Razavi et al. razavi were the first to adopt lexicon-based abuse detection. They constructed an insulting and abusing language dictionary of words and phrases, where each entry had an associated weight indicating its abusive impact. They utilized semantic rules and features derived from the lexicon to build a three-level Naive Bayes classification system and apply it to a dataset of INLINEFORM0 messages ( INLINEFORM1 flame and the rest okay) extracted from the Usenet newsgroup and the Natural Semantic Module company's employee conversation thread ( INLINEFORM2 accuracy). Njagi et al. gitari also employed such a lexicon-based approach and, more recently, Wiegand et al. wiegand proposed an automated framework for generating such lexicons. While methods based on lexicons performed well on explicit abuse, the researchers noted their limitations on implicit abuse.",
"Bag-of-words (bow) features have been integral to several works on abuse detection. Sood et al. sood2012 showed that an svm trained on word bi-gram features outperformed a word-list baseline utilizing a Levenshtein distance-based heuristic for detecting profanity. Their best classifier (combination of SVMs and word-lists) yielded an F INLINEFORM0 of INLINEFORM1 . Warner and Hirschberg warner employed a template-based strategy alongside Brown clustering to extract surface-level bow features from a dataset of paragraphs annotated for antisemitism, and achieved an F INLINEFORM2 of INLINEFORM3 using svms. Their approach is unique in that they framed the task as a word-sense disambiguation problem, i.e., whether a term carried an anti-semitic sense or not. Other examples of bow-based methods are those of Dinakar et al. dinakar2011modeling, Burnap and Williams burnap and Van Hee et al. vanhee who use word n-grams in conjunction with other features, such as typed-dependency relations or scores based on sentiment lexicons, to train svms ( INLINEFORM4 F INLINEFORM5 on the data-bully dataset). Recenlty, Salminen et al. salminen2018anatomy showed that a linear SVM using tf–idf weighted n-grams achieves the best performance (average F INLINEFORM6 of INLINEFORM7 ) on classification of hateful comments (from a YouTube channel and Facebook page of an online news organization) as one of 29 different hate categories (e.g., accusation, promoting violence, humiliation, etc.).",
"Several researchers have directly incorporated features and identity traits of users in order to model the likeliness of abusive behavior from users with certain traits, a process known as user profiling. Dadvar et al. davdar included the age of users alongside other traditional lexicon-based features to detect cyber-bullying, while Galán-García et al. galan2016supervised utilized the time of publication, geo-position and language in the profile of Twitter users. Waseem and Hovy waseemhovy exploited gender of Twitter users alongside character n-gram counts to improve detection of sexism and racism in tweets from data-twitter-wh (F INLINEFORM0 increased from INLINEFORM1 to INLINEFORM2 ). Using the same setup, Unsvåg and Gambäck unsvaag2018effects showed that the inclusion of social network-based (i.e., number of followers and friends) and activity-based (i.e., number of status updates and favorites) information of users alongside their gender further enhances performance ( INLINEFORM3 gain in F INLINEFORM4 ).",
"Building on the work of Djuric et al., Nobata et al. nobata evaluated the performance of a large range of features on the Yahoo! datasets (data-yahoo-*) using a regression model: (1) word and character n-grams; (2) linguistic features, e.g., number of polite/hate words and punctuation count; (3) syntactic features, e.g., parent and grandparent of node in a dependency tree; (4) distributional-semantic features, e.g., paragraph2vec comment representations. Although the best results were achieved with all features combined (F INLINEFORM0 INLINEFORM1 on data-yahoo-fin-a, INLINEFORM2 on data-yahoo-news-a), character n-grams on their own contributed significantly more than other features due to their robustness to noise (i.e., obfuscations, misspellings, unseen words). Experimenting with the data-yahoo-fin-dj dataset, Mehdad and Tetreault mehdad investigated whether character-level features are more indicative of abuse than word-level ones. Their results demonstrated the superiority of character-level features, showing that svm classifiers trained on Bayesian log-ratio vectors of average counts of character n-grams outperform the more intricate approach of Nobata et al. nobata in terms of AUC ( INLINEFORM3 vs. INLINEFORM4 ) as well as other rnn-based character and word-level models.",
"Samghabadi et al. W17-3010 utilized a similar set of features as Nobata et al. and augmented it with hand-engineered ones such as polarity scores derived from SentiWordNet, measures based on the LIWC program, and features based on emoticons. They then applied their method to three different datasets: data-wiki-att, a Kaggle dataset annotated for insult, and a dataset of questions and answers (each labeled as invective or neutral) that they created by crawling ask.fm. Distributional-semantic features combined with the aforementioned features constituted an effective feature space for the task ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 F INLINEFORM3 on data-wiki-att, Kaggle, ask.fm respectively). In line with the findings of Nobata et al. and Mehdad and Tetreault, character n-grams performed well on these datasets too."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Razavi et al. razavi were the first to adopt lexicon-based abuse detection. They constructed an insulting and abusing language dictionary of words and phrases, where each entry had an associated weight indicating its abusive impact.",
"While methods based on lexicons performed well on explicit abuse, the researchers noted their limitations on implicit abuse.",
"Bag-of-words (bow) features have been integral to several works on abuse detection. Sood et al. sood2012 showed that an svm trained on word bi-gram features outperformed a word-list baseline utilizing a Levenshtein distance-based heuristic for detecting profanity.",
"Several researchers have directly incorporated features and identity traits of users in order to model the likeliness of abusive behavior from users with certain traits, a process known as user profiling. Dadvar et al. davdar included the age of users alongside other traditional lexicon-based features to detect cyber-bullying, while Galán-García et al. galan2016supervised utilized the time of publication, geo-position and language in the profile of Twitter users. Waseem and Hovy waseemhovy exploited gender of Twitter users alongside character n-gram counts to improve detection of sexism and racism in tweets from data-twitter-wh (F INLINEFORM0 increased from INLINEFORM1 to INLINEFORM2 ). Using the same setup, Unsvåg and Gambäck unsvaag2018effects showed that the inclusion of social network-based (i.e., number of followers and friends) and activity-based (i.e., number of status updates and favorites) information of users alongside their gender further enhances performance ( INLINEFORM3 gain in F INLINEFORM4 ).",
"Building on the work of Djuric et al., Nobata et al. nobata evaluated the performance of a large range of features on the Yahoo! datasets (data-yahoo-*) using a regression model: (1) word and character n-grams; (2) linguistic features, e.g., number of polite/hate words and punctuation count; (3) syntactic features, e.g., parent and grandparent of node in a dependency tree; (4) distributional-semantic features, e.g., paragraph2vec comment representations. Although the best results were achieved with all features combined (F INLINEFORM0 INLINEFORM1 on data-yahoo-fin-a, INLINEFORM2 on data-yahoo-news-a), character n-grams on their own contributed significantly more than other features due to their robustness to noise (i.e., obfuscations, misspellings, unseen words). Experimenting with the data-yahoo-fin-dj dataset, Mehdad and Tetreault mehdad investigated whether character-level features are more indicative of abuse than word-level ones. Their results demonstrated the superiority of character-level features, showing that svm classifiers trained on Bayesian log-ratio vectors of average counts of character n-grams outperform the more intricate approach of Nobata et al. nobata in terms of AUC ( INLINEFORM3 vs. INLINEFORM4 ) as well as other rnn-based character and word-level models.",
"Samghabadi et al. W17-3010 utilized a similar set of features as Nobata et al. and augmented it with hand-engineered ones such as polarity scores derived from SentiWordNet, measures based on the LIWC program, and features based on emoticons. They then applied their method to three different datasets: data-wiki-att, a Kaggle dataset annotated for insult, and a dataset of questions and answers (each labeled as invective or neutral) that they created by crawling ask.fm. Distributional-semantic features combined with the aforementioned features constituted an effective feature space for the task ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 F INLINEFORM3 on data-wiki-att, Kaggle, ask.fm respectively). In line with the findings of Nobata et al. and Mehdad and Tetreault, character n-grams performed well on these datasets too."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Bag-of-words (bow) features have been integral to several works on abuse detection. Sood et al. sood2012 showed that an svm trained on word bi-gram features outperformed a word-list baseline utilizing a Levenshtein distance-based heuristic for detecting profanity. Their best classifier (combination of SVMs and word-lists) yielded an F INLINEFORM0 of INLINEFORM1 . Warner and Hirschberg warner employed a template-based strategy alongside Brown clustering to extract surface-level bow features from a dataset of paragraphs annotated for antisemitism, and achieved an F INLINEFORM2 of INLINEFORM3 using svms. Their approach is unique in that they framed the task as a word-sense disambiguation problem, i.e., whether a term carried an anti-semitic sense or not. Other examples of bow-based methods are those of Dinakar et al. dinakar2011modeling, Burnap and Williams burnap and Van Hee et al. vanhee who use word n-grams in conjunction with other features, such as typed-dependency relations or scores based on sentiment lexicons, to train svms ( INLINEFORM4 F INLINEFORM5 on the data-bully dataset). Recenlty, Salminen et al. salminen2018anatomy showed that a linear SVM using tf–idf weighted n-grams achieves the best performance (average F INLINEFORM6 of INLINEFORM7 ) on classification of hateful comments (from a YouTube channel and Facebook page of an online news organization) as one of 29 different hate categories (e.g., accusation, promoting violence, humiliation, etc.)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Bag-of-words (bow) features have been integral to several works on abuse detection. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"52fe54be05885481ad5f2f29f2d689227ae00ca6",
"a54b8346523fda27e414a3f696d910686cdcc466"
],
"answer": [
{
"evidence": [
"Mishra et al. mishra constructed a community graph of all users whose tweets are included in the data-twitter-wh dataset. Nodes in the graph were users while edges the follower-following relationship between them on Twitter. They then applied node2vec BIBREF21 to this graph to generate user embeddings. Inclusion of these embeddings into character n-gram based baselines yielded state of the art results on data-twitter-wh (F INLINEFORM0 increased from INLINEFORM1 and INLINEFORM2 to INLINEFORM3 and INLINEFORM4 on the racism and sexism classes respectively). The gains were attributed to the fact that user embeddings captured not only information about online communities, but also some elements of the wider conversation amongst connected users in the graph. Ribeiro et al. ribeiro and Mishra et al. mishragcn applied graph neural networks BIBREF22 , BIBREF23 to social graphs in order to generate user embeddings (i.e., profiles) that capture not only their surrounding community but also their linguistic behavior."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Mishra et al. mishra constructed a community graph of all users whose tweets are included in the data-twitter-wh dataset. Nodes in the graph were users while edges the follower-following relationship between them on Twitter. They then applied node2vec BIBREF21 to this graph to generate user embeddings. Inclusion of these embeddings into character n-gram based baselines yielded state of the art results on data-twitter-wh (F INLINEFORM0 increased from INLINEFORM1 and INLINEFORM2 to INLINEFORM3 and INLINEFORM4 on the racism and sexism classes respectively). "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"561501b121b1f4fdef44a60f5afe18e27e2ebc83",
"e7f3d082361b30af6eb64be419227369f841f78b"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Links and summaries of datasets mentioned in the paper that are publicly available."
],
"extractive_spans": [],
"free_form_answer": "DATA-TWITTER-WH, DATA-TWITTER-W, DATA-TWITTER-DAVID, DATA-TWITTER-F, DATA-WIKI-ATT, DATA-WIKI-AGG, DATA-WIKI-TOX, DATA-FOX-NEWS, DATA-GAZZETTA, DATA-FACEBOOK, Arabic News, GermEval, Ask.fm.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Links and summaries of datasets mentioned in the paper that are publicly available."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Links and summaries of datasets mentioned in the paper that are publicly available."
],
"extractive_spans": [],
"free_form_answer": "DATA-TWITTER-WH, DATA-TWITTER-W, DATA-TWITTER-DAVID, DATA-TWITTER-F, DATA-WIKI-ATT, DATA-WIKI-AGG, DATA-WIKI-TOX, DATA-FOX-NEWS, DATA-GAZZETTA, DATA-FACEBOOK, Arabic News, GermEval, Ask.fun",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Links and summaries of datasets mentioned in the paper that are publicly available."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"8c8d8950cb27577940b9141787e60c93a2f24917",
"afbae43ac4108988017d0aa7c0d03892d2a0cdca"
],
"answer": [
{
"evidence": [
"That said, the notion of abuse has proven elusive and difficult to formalize. Different norms across (online) communities can affect what is considered abusive BIBREF1 . In the context of natural language, abuse is a term that encompasses many different types of fine-grained negative expressions. For example, Nobata et al. nobata use it to collectively refer to hate speech, derogatory language and profanity, while Mishra et al. mishra use it to discuss racism and sexism. The definitions for different types of abuse tend to be overlapping and ambiguous. However, regardless of the specific type, we define abuse as any expression that is meant to denigrate or offend a particular person or group. Taking a course-grained view, Waseem et al. W17-3012 classify abuse into broad categories based on explicitness and directness. Explicit abuse comes in the form of expletives, derogatory words or threats, while implicit abuse has a more subtle appearance characterized by the presence of ambiguous terms and figures of speech such as metaphor or sarcasm. Directed abuse targets a particular individual as opposed to generalized abuse, which is aimed at a larger group such as a particular gender or ethnicity. This categorization exposes some of the intricacies that lie within the task of automated abuse detection. While directed and explicit abuse is relatively straightforward to detect for humans and machines alike, the same is not true for implicit or generalized abuse. This is illustrated in the works of Dadvar et al. davdar and Waseem and Hovy waseemhovy: Dadvar et al. observed an inter-annotator agreement of INLINEFORM0 on their cyber-bullying dataset. Cyber-bullying is a classic example of directed and explicit abuse since there is typically a single target who is harassed with personal attacks. On the other hand, Waseem and Hovy noted that INLINEFORM1 of all the disagreements in annotation of their dataset occurred on the sexism class. Sexism is typically both generalized and implicit."
],
"extractive_spans": [
"we define abuse as any expression that is meant to denigrate or offend a particular person or group."
],
"free_form_answer": "",
"highlighted_evidence": [
"However, regardless of the specific type, we define abuse as any expression that is meant to denigrate or offend a particular person or group."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"That said, the notion of abuse has proven elusive and difficult to formalize. Different norms across (online) communities can affect what is considered abusive BIBREF1 . In the context of natural language, abuse is a term that encompasses many different types of fine-grained negative expressions. For example, Nobata et al. nobata use it to collectively refer to hate speech, derogatory language and profanity, while Mishra et al. mishra use it to discuss racism and sexism. The definitions for different types of abuse tend to be overlapping and ambiguous. However, regardless of the specific type, we define abuse as any expression that is meant to denigrate or offend a particular person or group. Taking a course-grained view, Waseem et al. W17-3012 classify abuse into broad categories based on explicitness and directness. Explicit abuse comes in the form of expletives, derogatory words or threats, while implicit abuse has a more subtle appearance characterized by the presence of ambiguous terms and figures of speech such as metaphor or sarcasm. Directed abuse targets a particular individual as opposed to generalized abuse, which is aimed at a larger group such as a particular gender or ethnicity. This categorization exposes some of the intricacies that lie within the task of automated abuse detection. While directed and explicit abuse is relatively straightforward to detect for humans and machines alike, the same is not true for implicit or generalized abuse. This is illustrated in the works of Dadvar et al. davdar and Waseem and Hovy waseemhovy: Dadvar et al. observed an inter-annotator agreement of INLINEFORM0 on their cyber-bullying dataset. Cyber-bullying is a classic example of directed and explicit abuse since there is typically a single target who is harassed with personal attacks. On the other hand, Waseem and Hovy noted that INLINEFORM1 of all the disagreements in annotation of their dataset occurred on the sexism class. Sexism is typically both generalized and implicit."
],
"extractive_spans": [
"we define abuse as any expression that is meant to denigrate or offend a particular person or group."
],
"free_form_answer": "",
"highlighted_evidence": [
"The definitions for different types of abuse tend to be overlapping and ambiguous. However, regardless of the specific type, we define abuse as any expression that is meant to denigrate or offend a particular person or group. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Did the survey provide insight into features commonly found to be predictive of abusive content on online platforms?",
"Is deep learning the state-of-the-art method in automated abuse detection",
"What datasets were used in this work?",
"How is abuse defined for the purposes of this research?"
],
"question_id": [
"344238de7208902f7b3a46819cc6d83cc37448a0",
"56bbca3fe24c2e9384cc57f55f35f7f5ad5c5716",
"4c40fa01f626def0b69d1cb7bf9181b574ff6382",
"71b29ab3ddcdd11dcc63b0bb55e75914c07a2217"
],
"question_writer": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Links and summaries of datasets mentioned in the paper that are publicly available."
],
"file": [
"13-Table1-1.png"
]
} | [
"What datasets were used in this work?"
] | [
[
"1908.06024-13-Table1-1.png"
]
] | [
"DATA-TWITTER-WH, DATA-TWITTER-W, DATA-TWITTER-DAVID, DATA-TWITTER-F, DATA-WIKI-ATT, DATA-WIKI-AGG, DATA-WIKI-TOX, DATA-FOX-NEWS, DATA-GAZZETTA, DATA-FACEBOOK, Arabic News, GermEval, Ask.fun"
] | 291 |
2004.04498 | Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem | Training data for NLP tasks often exhibits gender bias in that fewer sentences refer to women than to men. In Neural Machine Translation (NMT) gender bias has been shown to reduce translation quality, particularly when the target language has grammatical gender. The recent WinoMT challenge set allows us to measure this effect directly (Stanovsky et al, 2019). Ideally we would reduce system bias by simply debiasing all data prior to training, but achieving this effectively is itself a challenge. Rather than attempt to create a `balanced' dataset, we use transfer learning on a small set of trusted, gender-balanced examples. This approach gives strong and consistent improvements in gender debiasing with much less computational cost than training from scratch. A known pitfall of transfer learning on new domains is `catastrophic forgetting', which we address both in adaptation and in inference. During adaptation we show that Elastic Weight Consolidation allows a performance trade-off between general translation quality and bias reduction. During inference we propose a lattice-rescoring scheme which outperforms all systems evaluated in Stanovsky et al (2019) on WinoMT with no degradation of general test set BLEU, and we show this scheme can be applied to remove gender bias in the output of `black box` online commercial MT systems. We demonstrate our approach translating from English into three languages with varied linguistic properties and data availability. | {
"paragraphs": [
[
"As language processing tools become more prevalent concern has grown over their susceptibility to social biases and their potential to propagate bias BIBREF1, BIBREF2. Natural language training data inevitably reflects biases present in our society. For example, gender bias manifests itself in training data which features more examples of men than of women. Tools trained on such data will then exhibit or even amplify the biases BIBREF3.",
"Gender bias is a particularly important problem for Neural Machine Translation (NMT) into gender-inflected languages. An over-prevalence of some gendered forms in the training data leads to translations with identifiable errors BIBREF0. Translations are better for sentences involving men and for sentences containing stereotypical gender roles. For example, mentions of male doctors are more reliably translated than those of male nurses BIBREF2, BIBREF4.",
"Recent approaches to the bias problem in NLP have involved training from scratch on artificially gender-balanced versions of the original dataset BIBREF5, BIBREF6 or with de-biased embeddings BIBREF7, BIBREF8. While these approaches may be effective, training from scratch is inefficient and gender-balancing embeddings or large parallel datasets are challenging problems BIBREF9.",
"Instead we propose treating gender debiasing as a domain adaptation problem, since NMT models can very quickly adapt to a new domain BIBREF10. To the best of our knowledge this work is the first to attempt NMT bias reduction by fine-tuning, rather than retraining. We consider three aspects of this adaptation problem: creating less biased adaptation data, parameter adaptation using this data, and inference with the debiased models produced by adaptation.",
"Regarding data, we suggest that a small, trusted gender-balanced set could allow more efficient and effective gender debiasing than a larger, noisier set. To explore this we create a tiny, handcrafted profession-based dataset for transfer learning. For contrast, we also consider fine-tuning on a counterfactual subset of the full dataset and propose a straightforward scheme for artificially gender-balancing parallel text for NMT.",
"We find that during domain adaptation improvement on the gender-debiased domain comes at the expense of translation quality due to catastrophic forgetting BIBREF11. We can balance improvement and forgetting with a regularised training procedure, Elastic Weight Consolidation (EWC), or in inference by a two-step lattice rescoring procedure.",
"We experiment with three language pairs, assessing the impact of debiasing on general domain BLEU and on the WinoMT challenge set BIBREF0. We find that continued training on the handcrafted set gives far stronger and more consistent improvements in gender-debiasing with orders of magnitude less training time, although as expected general translation performance as measured by BLEU decreases.",
"We further show that regularised adaptation with EWC can reduce bias while limiting degradation in general translation quality. We also present a lattice rescoring procedure in which initial hypotheses produced by the biased baseline system are transduced to create gender-inflected search spaces which can be rescored by the adapted model. We believe this approach, rescoring with models targeted to remove bias, is novel in NMT. The rescoring procedure improves WinoMT accuracy by up to 30% with no decrease in BLEU on the general test set.",
"Recent recommendations for ethics in Artificial Intelligence have suggested that social biases or imbalances in a dataset be addressed prior to model training BIBREF12. This recommendation presupposes that the source of bias in a dataset is both obvious and easily adjusted. We show that debiasing a full NMT dataset is difficult, and suggest alternative efficient and effective approaches for debiasing a model after it is trained. This avoids the need to identify and remove all possible biases prior to training, and has the added benefit of preserving privacy, since no access to the original data or knowledge of its contents is required. As evidence, in section SECREF43, we show this scheme can be applied to remove gender bias in the output of ‘black box‘ online commercial MT systems."
],
[
"BIBREF13 treat gender as a domain for machine translation, training from scratch by augmenting Europarl data with a tag indicating the speaker's gender. This does not inherently remove gender bias from the system but allows control over the translation hypothesis gender. BIBREF14 similarly prepend a short phrase at inference time which acts as a gender domain label for the entire sentence. These approaches are not directly applicable to text which may have more than one gendered entity per sentence, as in coreference resolution tasks.",
"BIBREF7 train NMT models from scratch with debiased word embeddings. They demonstrate improved performance on an English-Spanish occupations task with a single profession and pronoun per sentence. We assess our fine-tuning approaches on the WinoMT coreference set, with two entities to resolve per sentence.",
"For monolingual NLP tasks a typical approach is gender debiasing using counterfactual data augmentation where for each gendered sentence in the data a gender-swapped equivalent is added. BIBREF5 show improvement in coreference resolution for English using counterfactual data. BIBREF6 demonstrate a more complicated scheme for gender-inflected languages. However, their system focuses on words in isolation, and is difficult to apply to co-reference and conjunction situations with more than one term to swap, reducing its practicality for large MT datasets.",
"Recent work recognizes that NMT can be adapted to domains with desired attributes using small datasets BIBREF15, BIBREF16. Our choice of a small, trusted dataset for adaptation specifically to a debiased domain connects to recent work in data selection by BIBREF17, in which fine-tuning on less noisy data improves translation performance. Similarly we propose fine-tuning on less biased data to reduce gender bias in translations. This is loosely the inverse of the approach described by BIBREF18 for monolingual abusive language detection, which pre-trains on a larger, less biased set."
],
[
"We focus on translating coreference sentences containing professions as a representative subset of the gender bias problem. This follows much recent work on NLP gender bias BIBREF19, BIBREF5, BIBREF6 including the release of WinoMT, a relevant challenge set for NMT BIBREF0.",
"A sentence that highlights gender bias is:",
"The doctor told the nurse that she had been busy.",
"A human translator carrying out coreference resolution would infer that `she' refers to the doctor, and correctly translate the entity to German as Die Ärztin. An NMT model trained on a biased dataset in which most doctors are male might incorrectly default to the masculine form, Der Arzt.",
"Data bias does not just affect translations of the stereotyped roles. Since NMT inference is usually left-to-right, a mistranslation can lead to further, more obvious mistakes later in the translation. For example, our baseline en-de system translates the English sentence",
"The cleaner hates the developer because she always leaves the room dirty.",
"to the German",
"Der Reiniger haßt den Entwickler, weil er den Raum immer schmutzig lässt.",
"Here not only is `developer' mistranslated as the masculine den Entwickler instead of the feminine die Entwicklerin, but an unambiguous pronoun translation later in the sentence is incorrect: er (`he') is produced instead of sie (`she').",
"In practice, not all translations with gender-inflected words can be unambiguously resolved. A simple example is:",
"The doctor had been busy.",
"This would likely be translated with a masculine entity according to the conventions of a language, unless extra-sentential context was available. As well, some languages have adopted gender-neutral singular pronouns and profession terms, both to include non-binary people and to avoid the social biases of gendered language BIBREF20, although most languages lack widely-accepted conventions BIBREF21. This paper addresses gender bias that can be resolved at the sentence level and evaluated with existing test sets, and does not address these broader challenges."
],
[
"WinoMT BIBREF0 is a recently proposed challenge set for gender bias in NMT. Moreover it is the only significant challenge set we are aware of to evaluate translation gender bias comparably across several language pairs. It permits automatic bias evaluation for translation from English to eight target languages with grammatical gender. The source side of WinoMT is 3888 concatenated sentences from Winogender BIBREF19 and WinoBias BIBREF5. These are coreference resolution datasets in which each sentence contains a primary entity which is co-referent with a pronoun – the doctor in the first example above and the developer in the second – and a secondary entity – the nurse and the cleaner respectively.",
"WinoMT evaluation extracts the grammatical gender of the primary entity from each translation hypothesis by automatic word alignment followed by morphological analysis. WinoMT then compares the translated primary entity with the gold gender, with the objective being a correctly gendered translation. The authors emphasise the following metrics over the challenge set:",
"Accuracy – percentage of hypotheses with the correctly gendered primary entity.",
"$\\mathbf {\\Delta G}$ – difference in $F_1$ score between the set of sentences with masculine entities and the set with feminine entities.",
"$\\mathbf {\\Delta S}$ – difference in accuracy between the set of sentences with pro-stereotypical (`pro') entities and those with anti-stereotypical (`anti') entities, as determined by BIBREF5 using US labour statistics. For example, the `pro' set contains male doctors and female nurses, while `anti' contains female doctors and male nurses.",
"Our main objective is increasing accuracy. We also report on $\\Delta G$ and $\\Delta S$ for ease of comparison to previous work. Ideally the absolute values of $\\Delta G$ and $\\Delta S$ should be close to 0. A high positive $\\Delta G$ indicates that a model translates male entities better, while a high positive $\\Delta S$ indicates that a model stereotypes male and female entities. Large negative values for $\\Delta G$ and $\\Delta S$, indicating a bias towards female or anti-stereotypical translation, are as undesirable as large positive values.",
"We note that $\\Delta S$ can be significantly skewed by very biased systems. A model that generates male forms for almost all test sentences, stereotypical roles or not, will have an extremely low $\\Delta S$, since its pro- and anti-stereotypical class accuracy will both be about 50%. Consequently we also report:",
"M:F – ratio of hypotheses with male predictions to those with female predictions.",
"Ideally this should be close to 1.0, since the WinoMT challenge set is gender-balanced. While M:F correlates strongly with $\\Delta G$, we consider M:F easier to interpret, particularly since very high or low M:F reduce the relevance of $\\Delta S$.",
"Finally, we wish to reduce gender bias without reducing translation performance. We report BLEU BIBREF22 on separate, general test sets for each language pair. WinoMT is designed to work without target language references, and so it is not possible to measure translation performance on this set by measures such as BLEU."
],
[
"Our hypothesis is that the absence of gender bias can be treated as a small domain for the purposes of NMT model adaptation. In this case a well-formed small dataset may give better results than attempts at debiasing the entire original dataset.",
"We therefore construct a tiny, trivial set of gender-balanced English sentences which we can easily translate into each target language. The sentences follow the template:",
"The $[$PROFESSION$]$ finished $[$his$|$her$]$ work.",
"We refer to this as the handcrafted set. Each profession is from the list collected by BIBREF4 from US labour statistics. We simplify this list by removing field-specific adjectives. For example, we have a single profession `engineer', as opposed to specifying industrial engineer, locomotive engineer, etc. In total we select 194 professions, giving just 388 sentences in a gender-balanced set.",
"With manually translated masculine and feminine templates, we simply translate the masculine and feminine forms of each listed profession for each target language. In practice this translation is via an MT first-pass for speed, followed by manual checking, but given available lexicons this could be further automated. We note that the handcrafted sets contain no examples of coreference resolution and very little variety in terms of grammatical gender. A set of more complex sentences targeted at the coreference task might further improve WinoMT scores, but would be more difficult to produce for new languages.",
"We wish to distinguish between a model which improves gender translation, and one which improves its WinoMT scores simply by learning the vocabulary for previously unseen or uncommon professions. We therefore create a handcrafted no-overlap set, removing source sentences with professions occurring in WinoMT to leave 216 sentences. We increase this set back to 388 examples with balanced adjective-based sentences in the same pattern, e.g. The tall $[$man$|$woman$]$ finished $[$his$|$her$]$ work."
],
[
"For contrast, we fine-tune on an approximated counterfactual dataset. Counterfactual data augmentation is an intuitive solution to bias from data over-representation BIBREF23. It involves identifying the subset of sentences containing bias – in this case gendered terms – and, for each one, adding an equivalent sentence with the bias reversed – in this case a gender-swapped version.",
"While counterfactual data augmentation is relatively simple for sentences in English, the process for inflected languages is challenging, involving identifying and updating words that are co-referent with all gendered entities in a sentence. Gender-swapping MT training data additionally requires that the same entities are swapped in the corresponding parallel sentence. A robust scheme for gender-swapping multiple entities in inflected language sentences directly, together with corresponding parallel text, is beyond the scope of this paper. Instead we suggest a rough but straightforward approach for counterfactual data augmentation for NMT which to the best of our knowledge is the first application to parallel sentences.",
"We first perform simple gender-swapping on the subset of the English source sentences with gendered terms. We use the approach described in BIBREF5 which swaps a fixed list of gendered stopwords (e.g. man / woman, he / she).. We then greedily forward-translate the gender-swapped English sentences with a baseline NMT model trained on the the full source and target text, producing gender-swapped target language sentences.",
"This lets us compare four related sets for gender debiasing adaptation, as illustrated in Figure FIGREF11:",
"Original: a subset of parallel sentences from the original training data where the source sentence contains gendered stopwords.",
"Forward-translated (FTrans) original: the source side of the original set with forward-translated target sentences.",
"Forward-translated (FTrans) swapped: the original source sentences are gender-swapped, then forward-translated to produce gender-swapped target sentences.",
"Balanced: the concatenation of the original and FTrans swapped parallel datasets. This is twice the size of the other counterfactual sets.",
"Comparing performance in adaptation of FTrans swapped and FTrans original lets us distinguish between the effects of gender-swapping and of obtaining target sentences from forward-translation."
],
[
"Fine-tuning a converged neural network on data from a distinct domain typically leads to catastrophic forgetting of the original domain BIBREF11. We wish to adapt to the gender-balanced domain without losing general translation performance. This is a particular problem when fine-tuning on the very small and distinct handcrafted adaptation sets."
],
[
"Regularized training is a well-established approach for minimizing catastrophic forgetting during domain adaptation of machine translation BIBREF24. One effective form is Elastic Weight Consolidation (EWC) BIBREF25 which in NMT has been shown to maintain or even improve original domain performance BIBREF26, BIBREF27. In EWC a regularization term is added to the original loss function $L$ when training the debiased model (DB):",
"$\\theta ^{B}_{j}$ are the converged parameters of the original biased model, and $\\theta ^{DB}_j$ are the current debiased model parameters. $F_j=\\mathbb {E} \\big [ \\nabla ^2 L(\\theta ^{B}_j)\\big ] $, a Fisher information estimate over samples from the biased data under the biased model. We apply EWC when performance on the original validation set drops, selecting hyperparameter $\\lambda $ via validation set BLEU."
],
[
"An alternative approach for avoiding catastrophic forgetting takes inspiration from lattice rescoring for NMT BIBREF28 and Grammatical Error Correction BIBREF29. We assume we have two NMT models. With one we decode fluent translations which contain gender bias ($B$). For the one-best hypothesis we would translate:",
"The other model has undergone debiasing ($DB$) at a cost to translation performance, producing:",
"We construct a flower transducer $T$ that maps each word in the target language's vocabulary to itself, as well as to other forms of the same word with different gender inflections (Figure FIGREF21). We also construct $Y_B$, a lattice with one path representing the biased but fluent hypothesis $\\mathbf {y_B}$ (Figure FIGREF21).",
"The acceptor ${\\mathcal {P}}(\\mathbf {y_B}) = \\text{proj}_\\text{output} (Y_B \\circ T )$ defines a language consisting of all the gender-inflected versions of the biased first-pass translation $\\mathbf {y_B}$ that are allowed by $T$ (Figure FIGREF21). We can now decode with lattice rescoring ($LR$) by constraining inference to ${\\mathcal {P}}({\\mathbf {y_B}})$:",
"In practice we use beam search to decode the various hypotheses, and construct $T$ using heuristics on large vocabulary lists for each target language."
],
[
"WinoMT provides an evaluation framework for translation from English to eight diverse languages. We select three pairs for experiments: English to German (en-de), English to Spanish (en-es) and English to Hebrew (en-he). Our selection covers three language groups with varying linguistic properties: Germanic, Romance and Semitic. Training data available for each language pair also varies in quantity and quality. We filter training data based on parallel sentence lengths and length ratios.",
"For en-de, we use 17.6M sentence pairs from WMT19 news task datasets BIBREF30. We validate on newstest17 and test on newstest18.",
"For en-es we use 10M sentence pairs from the United Nations Parallel Corpus BIBREF31. While still a large set, the UNCorpus exhibits far less diversity than the en-de training data. We validate on newstest12 and test on newstest13.",
"For en-he we use 185K sentence pairs from the multilingual TED talks corpus BIBREF32. This is both a specialized domain and a much smaller training set. We validate on the IWSLT 2012 test set and test on IWSLT 2014.",
"Table TABREF29 summarises the sizes of datasets used, including their proportion of gendered sentences and ratio of sentences in the English source data containing male and female stopwords. A gendered sentence contains at least one English gendered stopword as used by BIBREF5.",
"Interestingly all three datasets have about the same proportion of gendered sentences: 11-12% of the overall set. While en-es appears to have a much more balanced gender ratio than the other pairs, examining the data shows this stems largely from sections of the UNCorpus containing phrases like `empower women' and `violence against women', rather than gender-balanced professional entities.",
"For en-de and en-es we learn joint 32K BPE vocabularies on the training data BIBREF33. For en-he we use separate source and target vocabularies. The Hebrew vocabulary is a 2k-merge BPE vocabulary, following the recommendations of BIBREF34 for smaller vocabularies when translating into lower-resource languages. For the en-he source vocabulary we experimented both with learning a new 32K vocabulary and with reusing the joint BPE vocabulary trained on the largest set – en-de – which lets us initialize the en-he system with the pre-trained en-de model. The latter resulted in higher BLEU and faster training."
],
[
"For all models we use a Transformer model BIBREF35 with the `base' parameter settings given in Tensor2Tensor BIBREF36. We train baselines to validation set BLEU convergence on one GPU, delaying gradient updates by factor 4 to simulate 4 GPUs BIBREF37. During fine-tuning training is continued without learning rate resetting. Normal and lattice-constrained decoding is via SGNMT with beam size 4. BLEU scores are calculated for cased, detokenized output using SacreBLEU BIBREF38"
],
[
"For lattice rescoring we require a transducer $T$ containing gender-inflected forms of words in the target vocabulary. To obtain the vocabulary for German we use all unique words in the full target training dataset. For Spanish and Hebrew, which have smaller and less diverse training sets, we use 2018 OpenSubtitles word lists. We then use DEMorphy BIBREF39 for German, spaCy BIBREF40 for Spanish and the small set of gendered suffixes for Hebrew BIBREF41 to approximately lemmatize each vocabulary word and generate its alternately-gendered forms. While there are almost certainly paths in $T$ containing non-words, we expect these to have low likelihood under the debiasing models. For lattice compositions we use the efficient OpenFST implementations BIBREF42."
],
[
"In Table TABREF36 we compare our three baselines to commercial systems on WinoMT, using results quoted directly from BIBREF0. Our baselines achieve comparable accuracy, masculine/feminine bias score $\\Delta G$ and pro/anti stereotypical bias score $\\Delta S$ to four commercial translation systems, outscoring at least one system for each metric on each language pair.",
"The $\\Delta S$ for our en-es baseline is surprisingly small. Investigation shows this model predicts male and female entities in a ratio of over 6:1. Since almost all entities are translated as male, pro- and anti-stereotypical class accuracy are both about 50%, making $\\Delta S$ very small. This highlights the importance of considering $\\Delta S$ in the context of $\\Delta G$ and M:F prediction ratio."
],
[
"Table TABREF37 compares our baseline model with the results of unregularised fine-tuning on the counterfactual sets described in Section SECREF10.",
"Fine-tuning for one epoch on original, a subset of the original data with gendered English stopwords, gives slight improvement in WinoMT accuracy and $\\Delta G$ for all language pairs, while $\\Delta S$ worsens. We suggest this set consolidates examples present in the full dataset, improving performance on gendered entities generally but emphasizing stereotypical roles.",
"On the FTrans original set $\\Delta G$ increases sharply relative to the original set, while $\\Delta S$ decreases. We suspect this set suffers from bias amplification BIBREF3 introduced by the baseline system during forward-translation. The model therefore over-predicts male entities even more heavily than we would expect given the gender makeup of the adaptation data's source side. Over-predicting male entities lowers $\\Delta S$ artificially.",
"Adapting to FTrans swapped increases accuracy and decreases both $\\Delta G$ and $\\Delta S$ relative to the baseline for en-de and en-es. This is the desired result, but not a particularly strong one, and it is not replicated for en-he. The balanced set has a very similar effect to the FTrans swapped set, with a smaller test BLEU difference from the baseline.",
"One consistent result from Table TABREF37 is the largest improvement in WinoMT accuracy corresponding to the model predicting male and female entities in the closest ratio. However, the best ratios for models adapted to these datasets are 2:1 or higher, and the accuracy improvement is small.",
"The purpose of EWC regularization is to avoid catastrophic forgetting of general translation ability. This does not occur in the counterfactual experiments, so we do not apply EWC. Moreover, WinoMT accuracy gains are small with standard fine-tuning, which allows maximum adaptation: we suspect EWC would prevent any improvements.",
"Overall, improvements from fine-tuning on counterfactual datasets (FTrans swapped and balanced) are present. However, they are not very different from the improvements when fine-tuning on equivalent non-counterfactual sets (original and FTrans original). Improvements are also inconsistent across language pairs."
],
[
"Results for fine-tuning on the handcrafted set are given in lines 3-6 of Table TABREF40. These experiments take place in minutes on a single GPU, compared to several hours when fine-tuning on the counterfactual sets and far longer if training from scratch.",
"Fine-tuning on the handcrafted sets gives a much faster BLEU drop than fine-tuning on counterfactual sets. This is unsurprising since the handcrafted sets are domains of new sentences with consistent sentence length and structure. By contrast the counterfactual sets are less repetitive and close to subsets of the original training data, slowing forgetting. We believe the degradation here is limited only by the ease of fitting the small handcrafted sets.",
"Line 4 of Table TABREF40 adapts to the handcrafted set, stopping when validation BLEU degrades by 5% on each language pair. This gives a WinoMT accuracy up to 19 points above the baseline, far more improvement than the best counterfactual result. Difference in gender score $\\Delta G$ improves by at least a factor of 4. Stereotyping score $\\Delta S$ also improves far more than for counterfactual fine-tuning. Unlike the Table TABREF37 results, the improvement is consistent across all WinoMT metrics and all language pairs.",
"The model adapted to no-overlap handcrafted data (line 3) gives a similar drop in BLEU to the model in line 4. This model also gives stronger and more consistent WinoMT improvements over the baseline compared to the balanced counterfactual set, despite the implausibly strict scenario of no English profession vocabulary in common with the challenge set. This demonstrates that the adapted model does not simply memorise vocabulary.",
"The drop in BLEU and improvement on WinoMT can be explored by varying the training procedure. The model of line 5 simply adapts to handcrafted data for more iterations with no regularisation, to approximate loss convergence on the handcrafted set. This leads to a severe drop in BLEU, but even higher WinoMT scores.",
"In line 6 we regularise adaptation with EWC. There is a trade-off between general translation performance and WinoMT accuracy. With EWC regularization tuned to balance validation BLEU and WinoMT accuracy, the decrease is limited to about 0.5 BLEU on each language pair. Adapting to convergence, as in line 5, would lead to further WinoMT gains at the expense of BLEU."
],
[
"In lines 7-9 of Table TABREF40 we consider lattice-rescoring the baseline output, using three models debiased on the handcrafted data.",
"Line 7 rescores the general test set hypotheses (line 1) with a model adapted to handcrafted data that has no source language profession vocabulary overlap with the test set (line 3). This scheme shows no BLEU degradation from the baseline on any language and in fact a slight improvement on en-he. Accuracy improvements on WinoMT are only slightly lower than for decoding with the rescoring model directly, as in line 3.",
"In line 8, lattice rescoring with the non-converged model adapted to handcrafted data (line 4) likewise leaves general BLEU unchanged or slightly improved. When lattice rescoring the WinoMT challenge set, 79%, 76% and 49% of the accuracy improvement is maintained on en-de, en-es and en-he respectively. This corresponds to accuracy gains of up to 30% relative to the baselines with no general translation performance loss.",
"In line 9, lattice-rescoring with the converged model of line 5 limits BLEU degradation to 0.2 BLEU on all languages, while maintaining 85%, 82% and 58% of the WinoMT accuracy improvement from the converged model for the three language pairs. Lattice rescoring with this model gives accuracy improvements over the baseline of 36%, 38% and 24% for en-de, en-es and en-he.",
"Rescoring en-he maintains a much smaller proportion of WinoMT accuracy improvement than en-de and en-es. We believe this is because the en-he baseline is particularly weak, due to a small and non-diverse training set. The baseline must produce some inflection of the correct entity before lattice rescoring can have an effect on gender bias."
],
[
"Finally, in Table TABREF41, we apply the gender inflection transducer to the commercial system translations listed in Table TABREF36. We find rescoring these lattices with our strongest debiasing model (line 5 of Table TABREF40) substantially improves WinoMT accuracy for all systems and language pairs.",
"One interesting observation is that WinoMT accuracy after rescoring tends to fall in a fairly narrow range for each language relative to the performance range of the baseline systems. For example, a 25.5% range in baseline en-de accuracy becomes a 3.6% range after rescoring. This suggests that our rescoring approach is not limited as much by the bias level of the baseline system as by the gender-inflection transducer and the models used in rescoring. Indeed, we emphasise that the large improvements reported in Table TABREF41 do not require any knowledge of the commercial systems or the data they were trained on; we use only the translation hypotheses they produce and our own rescoring model and transducer."
],
[
"We treat the presence of gender bias in NMT systems as a domain adaptation problem. We demonstrate strong improvements under the WinoMT challenge set by adapting to tiny, handcrafted gender-balanced datasets for three language pairs.",
"While naive domain adaptation leads to catastrophic forgetting, we further demonstrate two approaches to limit this: EWC and a lattice rescoring approach. Both allow debiasing while maintaining general translation performance. Lattice rescoring, although a two-step procedure, allows far more debiasing and potentially no degradation, without requiring access to the original model.",
"We suggest small-domain adaptation as a more effective and efficient approach to debiasing machine translation than counterfactual data augmentation. We do not claim to fix the bias problem in NMT, but demonstrate that bias can be reduced without degradation in overall translation quality."
],
[
"This work was supported by EPSRC grants EP/M508007/1 and EP/N509620/1 and has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service funded by EPSRC Tier-2 capital grant EP/P020259/1."
]
],
"section_name": [
"Introduction",
"Introduction ::: Related work",
"Gender bias in machine translation",
"Gender bias in machine translation ::: WinoMT challenge set and metrics",
"Gender bias in machine translation ::: Gender debiased datasets ::: Handcrafted profession dataset",
"Gender bias in machine translation ::: Gender debiased datasets ::: Counterfactual datasets",
"Gender bias in machine translation ::: Debiasing while maintaining general translation performance",
"Gender bias in machine translation ::: Debiasing while maintaining general translation performance ::: Regularized training",
"Gender bias in machine translation ::: Debiasing while maintaining general translation performance ::: Gender-inflected search spaces for rescoring with debiased models",
"Experiments ::: Languages and data",
"Experiments ::: Training and inference",
"Experiments ::: Lattice rescoring with debiased models",
"Experiments ::: Results ::: Baseline analysis",
"Experiments ::: Results ::: Counterfactual adaptation",
"Experiments ::: Results ::: Handcrafted profession set adaptation",
"Experiments ::: Results ::: Lattice rescoring with debiased models",
"Experiments ::: Results ::: Reducing gender bias in `black box' commercial systems",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"5fe713216207f3133e103e80a53138a33aa9b708",
"ed7800af8bddbdf88b9aa8807ad9f1fa1100c1e2"
],
"answer": [
{
"evidence": [
"We further show that regularised adaptation with EWC can reduce bias while limiting degradation in general translation quality. We also present a lattice rescoring procedure in which initial hypotheses produced by the biased baseline system are transduced to create gender-inflected search spaces which can be rescored by the adapted model. We believe this approach, rescoring with models targeted to remove bias, is novel in NMT. The rescoring procedure improves WinoMT accuracy by up to 30% with no decrease in BLEU on the general test set."
],
"extractive_spans": [],
"free_form_answer": "By transducing initial hypotheses produced by the biased baseline system to create gender-inflected search spaces which can\nbe rescored by the adapted model",
"highlighted_evidence": [
"We also present a lattice rescoring procedure in which initial hypotheses produced by the biased baseline system are transduced to create gender-inflected search spaces which can be rescored by the adapted model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We further show that regularised adaptation with EWC can reduce bias while limiting degradation in general translation quality. We also present a lattice rescoring procedure in which initial hypotheses produced by the biased baseline system are transduced to create gender-inflected search spaces which can be rescored by the adapted model. We believe this approach, rescoring with models targeted to remove bias, is novel in NMT. The rescoring procedure improves WinoMT accuracy by up to 30% with no decrease in BLEU on the general test set."
],
"extractive_spans": [
"initial hypotheses produced by the biased baseline system are transduced to create gender-inflected search spaces which can be rescored"
],
"free_form_answer": "",
"highlighted_evidence": [
"We also present a lattice rescoring procedure in which initial hypotheses produced by the biased baseline system are transduced to create gender-inflected search spaces which can be rescored by the adapted model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0bfb9111f4fd3aa2d0305e23ec4c2139abaa105f",
"77dbc92e06cd0251a138b68417e8a6b9dd6800fa"
],
"answer": [
{
"evidence": [
"WinoMT provides an evaluation framework for translation from English to eight diverse languages. We select three pairs for experiments: English to German (en-de), English to Spanish (en-es) and English to Hebrew (en-he). Our selection covers three language groups with varying linguistic properties: Germanic, Romance and Semitic. Training data available for each language pair also varies in quantity and quality. We filter training data based on parallel sentence lengths and length ratios."
],
"extractive_spans": [
"German",
"Spanish",
"Hebrew"
],
"free_form_answer": "",
"highlighted_evidence": [
"We select three pairs for experiments: English to German (en-de), English to Spanish (en-es) and English to Hebrew (en-he)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"WinoMT provides an evaluation framework for translation from English to eight diverse languages. We select three pairs for experiments: English to German (en-de), English to Spanish (en-es) and English to Hebrew (en-he). Our selection covers three language groups with varying linguistic properties: Germanic, Romance and Semitic. Training data available for each language pair also varies in quantity and quality. We filter training data based on parallel sentence lengths and length ratios."
],
"extractive_spans": [
"German",
"Spanish",
"Hebrew"
],
"free_form_answer": "",
"highlighted_evidence": [
"We select three pairs for experiments: English to German (en-de), English to Spanish (en-es) and English to Hebrew (en-he)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"1df7ce0d274bb5c25ce465c4f7a7172db12eb169",
"6e33df6dc75d73d0a9d6d918ed967721a9b7dfc4"
],
"answer": [
{
"evidence": [
"WinoMT evaluation extracts the grammatical gender of the primary entity from each translation hypothesis by automatic word alignment followed by morphological analysis. WinoMT then compares the translated primary entity with the gold gender, with the objective being a correctly gendered translation. The authors emphasise the following metrics over the challenge set:",
"Accuracy – percentage of hypotheses with the correctly gendered primary entity.",
"$\\mathbf {\\Delta G}$ – difference in $F_1$ score between the set of sentences with masculine entities and the set with feminine entities.",
"$\\mathbf {\\Delta S}$ – difference in accuracy between the set of sentences with pro-stereotypical (`pro') entities and those with anti-stereotypical (`anti') entities, as determined by BIBREF5 using US labour statistics. For example, the `pro' set contains male doctors and female nurses, while `anti' contains female doctors and male nurses.",
"We note that $\\Delta S$ can be significantly skewed by very biased systems. A model that generates male forms for almost all test sentences, stereotypical roles or not, will have an extremely low $\\Delta S$, since its pro- and anti-stereotypical class accuracy will both be about 50%. Consequently we also report:",
"M:F – ratio of hypotheses with male predictions to those with female predictions.",
"Finally, we wish to reduce gender bias without reducing translation performance. We report BLEU BIBREF22 on separate, general test sets for each language pair. WinoMT is designed to work without target language references, and so it is not possible to measure translation performance on this set by measures such as BLEU."
],
"extractive_spans": [
"Accuracy",
"$\\mathbf {\\Delta G}$",
"$\\mathbf {\\Delta S}$",
"BLEU"
],
"free_form_answer": "",
"highlighted_evidence": [
"The authors emphasise the following metrics over the challenge set:\n\nAccuracy – percentage of hypotheses with the correctly gendered primary entity.\n\n$\\mathbf {\\Delta G}$ – difference in $F_1$ score between the set of sentences with masculine entities and the set with feminine entities.\n\n$\\mathbf {\\Delta S}$ – difference in accuracy between the set of sentences with pro-stereotypical (`pro') entities and those with anti-stereotypical (`anti') entities, as determined by BIBREF5 using US labour statistics. For example, the `pro' set contains male doctors and female nurses, while `anti' contains female doctors and male nurses.",
"Consequently we also report:\n\nM:F – ratio of hypotheses with male predictions to those with female predictions.",
"We report BLEU BIBREF22 on separate, general test sets for each language pair."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"WinoMT evaluation extracts the grammatical gender of the primary entity from each translation hypothesis by automatic word alignment followed by morphological analysis. WinoMT then compares the translated primary entity with the gold gender, with the objective being a correctly gendered translation. The authors emphasise the following metrics over the challenge set:",
"$\\mathbf {\\Delta G}$ – difference in $F_1$ score between the set of sentences with masculine entities and the set with feminine entities.",
"$\\mathbf {\\Delta S}$ – difference in accuracy between the set of sentences with pro-stereotypical (`pro') entities and those with anti-stereotypical (`anti') entities, as determined by BIBREF5 using US labour statistics. For example, the `pro' set contains male doctors and female nurses, while `anti' contains female doctors and male nurses."
],
"extractive_spans": [
"$\\mathbf {\\Delta G}$ – difference in $F_1$ score between the set of sentences with masculine entities and the set with feminine entities",
"$\\mathbf {\\Delta S}$ – difference in accuracy between the set of sentences with pro-stereotypical (`pro') entities and those with anti-stereotypical (`anti') entities"
],
"free_form_answer": "",
"highlighted_evidence": [
"The authors emphasise the following metrics over the challenge set:",
"$\\mathbf {\\Delta G}$ – difference in $F_1$ score between the set of sentences with masculine entities and the set with feminine entities.",
"$\\mathbf {\\Delta S}$ – difference in accuracy between the set of sentences with pro-stereotypical (`pro') entities and those with anti-stereotypical (`anti') entities, as determined by BIBREF5 using US labour statistics. For example, the `pro' set contains male doctors and female nurses, while `anti' contains female doctors and male nurses."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"66a1da720fd8bc792f10a671d88c0514baef0e7e",
"8ea58badc09526c62028205bf4f437a9c611fad8"
],
"answer": [
{
"evidence": [
"Regarding data, we suggest that a small, trusted gender-balanced set could allow more efficient and effective gender debiasing than a larger, noisier set. To explore this we create a tiny, handcrafted profession-based dataset for transfer learning. For contrast, we also consider fine-tuning on a counterfactual subset of the full dataset and propose a straightforward scheme for artificially gender-balancing parallel text for NMT."
],
"extractive_spans": [
" create a tiny, handcrafted profession-based dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"To explore this we create a tiny, handcrafted profession-based dataset for transfer learning."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We refer to this as the handcrafted set. Each profession is from the list collected by BIBREF4 from US labour statistics. We simplify this list by removing field-specific adjectives. For example, we have a single profession `engineer', as opposed to specifying industrial engineer, locomotive engineer, etc. In total we select 194 professions, giving just 388 sentences in a gender-balanced set.",
"With manually translated masculine and feminine templates, we simply translate the masculine and feminine forms of each listed profession for each target language. In practice this translation is via an MT first-pass for speed, followed by manual checking, but given available lexicons this could be further automated. We note that the handcrafted sets contain no examples of coreference resolution and very little variety in terms of grammatical gender. A set of more complex sentences targeted at the coreference task might further improve WinoMT scores, but would be more difficult to produce for new languages."
],
"extractive_spans": [],
"free_form_answer": "They select professions from the list collected by BIBREF4 from US labour statistics and manually translate masculine and feminine examples",
"highlighted_evidence": [
"Each profession is from the list collected by BIBREF4 from US labour statistics. We simplify this list by removing field-specific adjectives.",
"With manually translated masculine and feminine templates, we simply translate the masculine and feminine forms of each listed profession for each target language."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How does lattice rescoring improve inference?",
"What three languages are used in the translation experiments?",
"What metrics are used to measure bias reduction?",
"How is the set of trusted, gender-balanced examples selected?"
],
"question_id": [
"cf82251a6a5a77e29627560eb7c05c3eddc20825",
"b1fe6a39b474933038b44b6d45e5ca32af7c3e36",
"919681faa9731057b3fae5052b7da598abd3e04b",
"2749fb1725a2c4bdba5848e2fc424a43e7c4be51"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"bias",
"bias",
"bias",
"bias"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Generating counterfactual datasets for adaptation. The Original set is 1||2, a simple subset of the full dataset. FTrans original is 1||3, FTrans swapped is 4||5, and Balanced is 1,4||2,5",
"Table 1: Parallel sentence counts. A gendered sentence pair has minimum one gendered stopword on the English side. M:F is ratio of male vs female gendered training sentences.",
"Table 2: WinoMT accuracy, masculine/feminine bias score ∆G and pro/anti stereotypical bias score ∆S for our baselines compared to commercial systems, whose scores are quoted directly from Stanovsky et al. (2019).",
"Table 3: General test set BLEU and WinoMT scores after unregularised fine-tuning the baseline on four genderbased adaptation datasets. Improvements are inconsistent across language pairs.",
"Table 4: General test set BLEU and WinoMT scores after fine-tuning on the handcrafted profession set, compared to fine-tuning on the most consistent counterfactual set. Lines 1-2 duplicated from Table 3. Lines 3-4 vary adaptation data. Lines 5-6 vary adaptation training procedure. Lines 7-9 apply lattice rescoring to baseline hypotheses.",
"Table 5: We generate gender-inflected lattices from commercial system translations for WinoMT, collected by Stanovsky et al. (2019). We then rescore with the debiased model from line 5 of Table 4. This table gives WinoMT scores for the rescored hypotheses, with bracketed baseline scores duplicated from Table 2."
],
"file": [
"4-Figure1-1.png",
"6-Table1-1.png",
"8-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"9-Table5-1.png"
]
} | [
"How does lattice rescoring improve inference?",
"How is the set of trusted, gender-balanced examples selected?"
] | [
[
"2004.04498-Introduction-7"
],
[
"2004.04498-Introduction-4",
"2004.04498-Gender bias in machine translation ::: Gender debiased datasets ::: Handcrafted profession dataset-4",
"2004.04498-Gender bias in machine translation ::: Gender debiased datasets ::: Handcrafted profession dataset-3"
]
] | [
"By transducing initial hypotheses produced by the biased baseline system to create gender-inflected search spaces which can\nbe rescored by the adapted model",
"They select professions from the list collected by BIBREF4 from US labour statistics and manually translate masculine and feminine examples"
] | 293 |
1912.07076 | Multilingual is not enough: BERT for Finnish | Deep learning-based language models pretrained on large unannotated text corpora have been demonstrated to allow efficient transfer learning for natural language processing, with recent approaches such as the transformer-based BERT model advancing the state of the art across a variety of tasks. While most work on these models has focused on high-resource languages, in particular English, a number of recent efforts have introduced multilingual models that can be fine-tuned to address tasks in a large number of different languages. However, we still lack a thorough understanding of the capabilities of these models, in particular for lower-resourced languages. In this paper, we focus on Finnish and thoroughly evaluate the multilingual BERT model on a range of tasks, comparing it with a new Finnish BERT model trained from scratch. The new language-specific model is shown to systematically and clearly outperform the multilingual. While the multilingual model largely fails to reach the performance of previously proposed methods, the custom Finnish BERT model establishes new state-of-the-art results on all corpora for all reference tasks: part-of-speech tagging, named entity recognition, and dependency parsing. We release the model and all related resources created for this study with open licenses at this https URL . | {
"paragraphs": [
[
"Transfer learning approaches using deep neural network architectures have recently achieved substantial advances in a range of natural language processing (NLP) tasks ranging from sequence labeling tasks such as part-of-speech (POS) tagging and named entity recognition (NER) BIBREF0 to dependency parsing BIBREF1 and natural language understanding (NLU) tasks BIBREF2. While the great majority of this work has focused primarily on English, a number of studies have also targeted other languages, typically through multilingual models.",
"The BERT model of devlin2018bert has been particularly influential, establishing state-of-the-art results for English for a range of NLU tasks and NER when it was released. For most languages, the only currently available BERT model is the multilingual model (M-BERT) trained on pooled data from 104 languages. While M-BERT has been shown to have a remarkable ability to generalize across languages BIBREF3, several studies have also demonstrated that monolingual BERT models, where available, can notably outperform M-BERT. Such results include the evaluation of the recently released French BERT model BIBREF4, the preliminary results accompanying the release of a German BERT model, and the evaluation of ronnqvist-etal-2019-multilingual comparing M-BERT with English and German monolingual models.",
"In this paper, we study the application of language-specific and multilingual BERT models to Finnish NLP. We introduce a new Finnish BERT model trained from scratch and perform a comprehensive evaluation comparing its performance to M-BERT on established datasets for POS tagging, NER, and dependency parsing as well as a range of diagnostic text classification tasks. The results show that 1) on most tasks the multilingual model does not represent an advance over previous state of the art, indicating that multilingual models may fail to deliver on the promise of deep transfer learning for lower-resourced languages, and 2) the custom Finnish BERT model systematically outperforms the multilingual as well as all previously proposed methods on all benchmark tasks, showing that language-specific deep transfer learning models can provide comparable advances to those reported for much higher-resourced languages."
],
[
"The current transfer learning methods have evolved from word embedding techniques, such as word2vec BIBREF5, GLoVe BIBREF6 and fastText BIBREF7, to take into account the textual context of words. Crucially, incorporating the context avoids the obvious limitations stemming from the one-vector-per-unique-word assumption inherent to the previous word embedding methods. The current successful wave of work proposing and applying different contextualized word embeddings was launched with ELMo BIBREF0, a context embedding method based on bidirectional LSTM networks. Another notable example is the ULMFit model BIBREF8, which specifically focuses on techniques for domain adaptation of LSTM-based language models. Following the introduction of the attention-based (as opposed to recurrent) Transformer architecture BIBREF9, BERT was proposed by BIBREF2, demonstrating superior performance on a broad array of tasks. The BERT model has been further refined in a number of follow-up studies BIBREF10, BIBREF11 and, presently, BERT and related models form the de facto standard approach to embedding text segments as well as individual words in context.",
"Unlike the previous generation of models, training BERT is a computationally intensive task, requiring substantial resources. As of this writing, Google has released English and Chinese monolingual BERT models and the multilingual M-BERT model covering 104 languages. Subsequently, monolingual BERT models have been published for German and French BIBREF4. In a separate line of work, a cross-lingual BERT model for 15 languages was published by BIBREF12, leveraging also cross-lingual signals. Finally, a number of studies have introduced monolingual models focusing on particular subdomains of English, such as BioBERT BIBREF13 and SciBERT BIBREF14 for biomedical publications and scientific text."
],
[
"We next introduce the sources of unlabeled data used to pretrain FinBERT and present the data filtering and cleanup, vocabulary generation, and pretraining processes."
],
[
"To provide a sufficiently large and varied unannotated corpus for pretraining, we compiled Finnish texts from three primary sources: news, online discussion, and an internet crawl. All of the unannotated texts were split into sentences, tokenized, and parsed using the Turku Neural Parser pipeline BIBREF15. Table TABREF4 summarizes the initial statistics of the three sources prior to cleanup and filtering."
],
[
"We combine two major sources of Finnish news: the Yle corpus, an archive of news published by Finland's national public broadcasting company in the years 2011-2018, and The STT corpus of newswire articles sent to media outlets by the Finnish News Agency (STT) between 1992 and 2018. The combined resources contain approx. 900 million tokens, with 20% originating from the Yle corpus and 80% from STT."
],
[
"The Suomi24 corpus (version 2017H2) contains all posts to the Suomi24 online discussion website from 2001 to 2017. Suomi24 is one of the largest social networking forums in Finland and covers a broad range of topics and levels of style and formality in language. The corpus is also roughly five times the size of the available news resources."
],
[
"Two primary sources were used to create pretraining data from unrestricted crawls. First, we compiled documents from the dedicated internet crawl of the Finnish internet of luotolahti2015towards run between 2014 and 2016 using the SpiderLing crawler BIBREF16. Second, we selected texts from the Common Crawl project by running a a map-reduce language detection job on the plain text material from Common Crawl. These sources were supplemented with plain text extracted from the Finnish Wikipedia using the mwlib library. Following initial compilation, this text collection was analyzed for using the Onion deduplication tool. Duplicate documents were removed, and remaining documents grouped by their level of duplication."
],
[
"As quality can be more important than quantity for pretraining data BIBREF17, we applied a series of custom cleaning and filtering steps to the raw textual data. Initial cleaning removed header and tag material from newswire documents. In the first filtering step, machine translated and generated texts were removed using a simple support vector machine (SVM) classifier with lexical features trained on data from the FinCORE corpus BIBREF18. The remaining documents were then aggressively filtered using language detection and hand-written heuristics, removing documents that e.g. had too high a ratio of digits, uppercase or non-Finnish alphabetic characters, or had low average sentence length. A delexicalized SVM classifier operating on parse-derived features was then trained on news (positives) and heuristically filtered documents (negatives) and applied to remove documents that were morphosyntactically similar to the latter. Finally, all internet crawl-sourced documents featuring 25% or more duplication were removed from the data. The statistics of the final pretraining data produced in this process are summarized in Table TABREF10. We note that even with this aggressive filtering, this data is roughly 30 times the size of the Finnish Wikipedia included in M-BERT pretraining data."
],
[
"To generate dedicated BERT vocabularies for Finnish, a sample of cleaned and filtered sentences were first tokenized using BERT BasicTokenizer, generating both a cased version where punctuation is separated, and an uncased version where characters are additionally mapped to lowercase and accents stripped. We then used the SentencePiece BIBREF19 implementation of byte-pair-encoding (BPE) BIBREF20 to generate cased and uncased vocabularies of 50,000 word pieces each.",
"To assess the coverage of the generated cased and uncased vocabularies and compare these to previously introduced vocabularies, we sampled a random 1% of tokens extracted using WikiExtractor from the English and Finnish Wikipedias and tokenized the texts using various vocabularies to determine the number of word pieces and unknown pieces per basic token. Table TABREF15 shows the results of this evaluation. For English, both BERT and M-BERT generate less than 1.2 WordPieces per token, meaning that the model will represent the great majority of words as a single piece. For Finnish, this ratio is nearly 2 for M-BERT. While some of this difference is explained by the morphological complexity of the language, it also reflects that only a small part of the M-BERT vocabulary is dedicated to Finnish: using the language-specific FinBERT vocabularies, this ratio remains notably lower even though the size of these vocabularies is only half of the M-BERT vocabularies. Table TABREF16 shows examples of tokenization using the FinBERT and M-BERT vocabularies."
],
[
"We used BERT tools to create pretraining examples using the same masked language model and next sentence prediction tasks used for the original BERT. Separate duplication factors were set for news, discussion and crawl texts to create a roughly balanced number of examples from each source. We also used whole-word masking, where all pieces of a word are masked together rather than selecting masked word pieces independently. We otherwise matched the parameters and process used to create pretraining data for the original BERT, including generating separate examples with sequence lengths 128 and 512 and setting the maximum number of masked tokens per sequence separately for each (20 and 77, respectively)."
],
[
"We pretrained cased and uncased models configured similarly to the base variants of BERT, with 110M parameters for each. The models were trained using 8 Nvidia V100 GPUs across 2 nodes on the Puhti supercomputer of CSC, the Finnish IT Center for Science. Following the approach of devlin2018bert, each model was trained for 1M steps, where the initial 90% used a maximum sequence length of 128 and the last 10% the full 512. A batch size of 140 per GPU was used for primary training, giving a global batch size of 1120. Due to memory constraints, the batch size was dropped to 20 per GPU for training with sequence length 512. We used the LAMB optimizer BIBREF21 with warmup over the first 1% of steps to a peak learning rate of 1e-4 followed by decay. Pretraining took approximately 12 days to complete per model variant."
],
[
"We next present an evaluation of the M-BERT and FinBERT models on a series of Finnish datasets representing both downstream NLP tasks and diagnostic evaluation tasks.",
"Unless stated otherwise, all experiments follow the basic setup used in the experiments of devlin2018bert, selecting the learning rate, batch size and the number of epochs used for fine-tuning separately for each model and dataset combination using a grid search with evaluation on the development data. Other model and optimizer parameters were kept at the BERT defaults. Excepting for the parsing experiments, we repeat each experiment 5-10 times and report result mean and standard deviation."
],
[
"Part of speech tagging is a standard sequence labeling task and several Finnish resources are available for the task."
],
[
"To assess POS tagging performance, we use the POS annotations of the three Finnish treebanks included in the Universal Dependencies (UD) collection BIBREF24: the Turku Dependency Treebank (TDT) BIBREF25, FinnTreeBank (FTB) BIBREF26 and Parallel UD treebank (PUD) BIBREF27. A broad range of methods were applied to tagging these resources as a subtask in the recent CoNLL shared tasks in 2017 and 2018 BIBREF28, and we use the CoNLL 2018 versions (UD version 2.2) of these corpora to assure comparability with their results. The statistics of these resources are shown in Table TABREF17. As the PUD corpus only provides a test set, we train and select parameters on the training and development sets of the compatibly annotated TDT corpus for evaluation on PUD. The CoNLL shared task proceeds from raw text and thus requires sentence splitting and tokenization in order to assign POS tags. To focus on tagging performance while maintaining comparability, we predict tags for the tokens predicted by the Uppsala system BIBREF29, distributed as part of the CoNLL'18 shared task system outputs BIBREF30."
],
[
"We implement the BERT POS tagger straightforwardly by attaching a time-distributed dense output layer over the top layer of BERT and using the first piece of each wordpiece-tokenized input word to represent the word. The implementation and data processing tools are openly available. We compare POS tagging results to the best-performing methods for each corpus in the CoNLL 2018 shared task, namely that of che2018towards for TDT and FTB and lim2018sex for PUD. We report performance for the UPOS metric as implemented by the official CoNLL 2018 evaluation script."
],
[
"Table TABREF25 summarizes the results for POS tagging. We find that neither M-BERT model improves on the previous state of the art for any of the three resources, with results ranging 0.1-0.8% points below the best previously published results. By contrast, both language-specific models outperform the previous state of the art, with absolute improvements for FinBERT cased ranging between 0.4 and 1.7% points. While these improvements over the already very high reference results are modest in absolute terms, the relative reductions in errors are notable: in particular, the FinBERT cased error rate on FTB is less than half of the best CoNLL'18 result BIBREF22. We also note that the uncased models are surprisingly competitive with their cased equivalents for a task where capitalization has long been an important feature: for example, FinBERT uncased performance is within approx. 0.1% points of FinBERT cased for all corpora."
],
[
"Like POS tagging, named entity recognition is conventionally cast as a sequence labeling task. During the development of FinBERT, only one corpus was available for Finnish NER."
],
[
"FiNER, a manually annotated NER corpus for Finnish, was recently introduced by ruokolainen2019finnish. The corpus annotations cover five types of named entities – person, organization, location, product and event – as well as dates. The primary corpus texts are drawn from a Finnish technology news publication, and it additionally contains an out-of-domain test set of documents drawn from the Finnish Wikipedia. In addition to conventional CoNLL-style named entity annotation, the corpus includes a small number of nested annotations (under 5% of the total). As ruokolainen2019finnish report results also for top-level (non-nested) annotations and the recognition of nested entity mentions would complicate evaluation, we here consider only the top-level annotations of the corpus. Table TABREF26 summarizes the statistics of these annotations."
],
[
"Our NER implementation is based on the approach proposed for CoNLL English NER by devlin2018bert. A dense layer is attached on top of the BERT model to predict IOB tags independently, without a CRF layer. To include document context for each sentence, we simply concatenate as many of the following sentences as can fit in the 512 wordpiece sequence. The FiNER data does not identify document boundaries, and therefore not all these sentences are necessarily from the same document. We make the our implementation available under an open licence.",
"We compare NER results to the rule-based FiNER-tagger BIBREF32 developed together with the FiNER corpus and to the neural network-based model of gungor2018improving targeted specifically toward morphologically rich languages. The former achieved the highest results on the corpus and the latter was the best-performing machine learning-based method in the experiments of ruokolainen2019finnish. Named entity recognition performance is evaluated in terms of exact mention-level precision, recall and F-score as implemented by the standard conlleval script, and F-score is used to compare performance."
],
[
"The results for named entity recognition are summarized in Table TABREF34 for the in-domain (technology news) test set and Table TABREF35 for the out-of-domain (Wikipedia) test set. We find that while M-BERT is able to outperform the best previously published results on the in-domain test set, it fails to reach the performance of FiNER-tagger on the out-of-domain test set. As for POS tagging, the language-specific FinBERT model again outperforms both M-BERT as well as all previously proposed methods, establishing new state-of-the-art results for Finnish named entity recognition."
],
[
"Dependency parsing involves the prediction of a directed labeled graph over tokens. Finnish dependency parsing has a long history and several established resources are available for the task."
],
[
"The CoNLL 2018 shared task addressed end-to-end parsing from raw text into dependency structures on 82 different corpora representing 57 languages BIBREF28. We evaluate the pre-trained BERT models on the dependency parsing task using the three Finnish UD corpora introduced in Section SECREF27: the Turku Dependency Treebank (TDT), FinnTreeBank (FTB) and the Parallel UD treebank (PUD). To allow direct comparison with CoNLL 2018 results, we use the same versions of the corpora as used in the shared task (UD version 2.2) and evaluate performance using the official script provided by the task organizers. These corpora are the same used in the part-of-speech tagging experiments, and their key statistics were summarized above in Table TABREF17."
],
[
"We evaluate the models using the Udify dependency parser recently introduced by BIBREF1. Udify is a multi-task model that support supporting multi- or monolingual fine-tuning of pre-trained BERT models on UD treebanks. Udify implements a multi-task network where a separate prediction layer for each task is added on top of the pre-trained BERT encoder. Additionally, instead of using only the top encoder layer representation in prediction, Udify adds a layers-wise dot-product attention, which calculates a weighted sum of all intermediate representation of 12 BERT layers for each token. All prediction layers as well as layer-wise attention are trained simultaneously, while also fine-tuning the pre-trained BERT weights.",
"We train separate Udify parsing models using monolingual fine-tuning for TDT and FTB. The TDT models are used to evaluate performance also on PUD, which does not include a training set. We report parser performance in terms of Labeled Attachment Score (LAS). Each parser model is fine-tuned for 160 epochs with BERT weights kept frozen during the first epoch and subsequently updated along with other weights. The learning rate scheduler warm-up period is defined to be approximately one epoch. Otherwise, parameters are the same as used in BIBREF1. As the Udify model does not implement sentence or token segmentation, we use UDPipe BIBREF34 to pre-segment the text when reporting LAS on predicted segmentation.",
"We compare our results to the best-performing system in the CoNLL 2018 shared task for the LAS metric, HIT-SCIR BIBREF22. In addition to having the highest average score over all treebanks for this metric, the system also achieved the highest LAS among 26 participants for each of the three Finnish treebanks. The dependency parser used in the HIT-SCIR system is the biaffine graph-based parser of BIBREF35 with deep contextualized word embeddings (ELMo) BIBREF36 trained monolingually on web crawl and Wikipedia data provided by BIBREF37. The final HIT-SCIR model is an ensemble over three parser models trained with different parameter initializations, where the final prediction is calculated by averaging the softmaxed output scores.",
"We also compare results to the recent work of BIBREF33, where the merits of two parsing architectures, graph-based BIBREF38 and transition-based BIBREF39, are studied with two different deep contextualized embeddings, ELMo and BERT. We include results for their best-performing combination on the Finnish TDT corpus, the transition-based parser with monolingual ELMo embeddings."
],
[
"Table TABREF41 shows LAS results for predicted and gold segmentation. While Udify initialized with M-BERT fails to outperform our strongest baseline BIBREF22, Udify initialized with FinBERT achieves notably higher performance on all three treebanks, establishing new state-of-the-art parsing results for Finnish with a large margin. Depending on the treebank, Udify with cased FinBERT LAS results are 2.3–3.6% points above the previous state of the art, decreasing errors by 24%–31% relatively.",
"Casing seem to have only a moderate impact in parsing, as the performance of cased and uncased models falls within 0.1–0.6% point range in each treebank. However, in each case the trend is that with FinBERT the cased version always outperforms the uncased one, while with M-BERT the story is opposite, the uncased always outperforming the cased one.",
"To relate the high LAS of 93.56 achieved with the combination of the Udify parser and our pre-trained FinBERT model to human performance, we refer to the original annotation of the TDT corpus BIBREF40, where individual annotators were measured against the double-annotated and resolved final annotations. The comparison is reported in terms of LAS. Here, one must take into account that the original TDT corpus was annotated in the Stanford Dependencies (SD) annotation scheme BIBREF41, slightly modified to be suitable for the Finnish language, while the work reported in this paper uses the UD version of the corpus. Thus, the reported numbers are not directly comparable, but keeping in mind the similarities of SD and UD annotation schemes, give a ballpark estimate of human performance in the task. BIBREF40 report the average LAS of the five human annotators who participated in the treebank construction as 91.3, with individual LAS scores ranging from 95.9 to 71.8 (or 88.0 ignoring an annotator who only annotated 2% of the treebank and was still in the training phrase). Based on these numbers, the achieved parser LAS of 93.56 seems to be on par with or even above average human level performance and approaching the level of a well-trained and skilled annotator."
],
[
"Finnish lacks the annotated language resources to construct a comprehensive collection of classification tasks such as those available for English BIBREF42, BIBREF43, BIBREF44. To assess model performance at text classification, we create two datasets based on Finnish document collections with topic information, one representing formal language (news) and the other informal (online discussion)."
],
[
"Documents in the Yle news corpus (Section SECREF3) are annotated using a controlled vocabulary to identify subjects such as sports, politics, and economy. We identified ten such upper-level topics that were largely non-overlapping in the data and sampled documents annotated with exactly one selected topic to create a ten-class classification dataset. As the Yle corpus is available for download under a license that does not allow redistribution, we release tools to recreate this dataset. The Ylilauta corpus consists of the text of discussions on the Finnish online discussion forum Ylilauta from 2012 to 2014. Each posted message belongs to exactly one board, with topics such as games, fashion and television. We identified the ten most frequent topics and sampled messages consisting of at least ten tokens to create a text classification dataset from the Ylilauta data.",
"To facilitate analysis and comparison, we downsample both corpora to create balanced datasets with 10000 training examples as well as 1000 development and 1000 test examples of each class. To reflect generalization performance to new documents, both resources were split chronologically, drawing the training set from the oldest texts, the test set from the newest, and the development set from texts published between the two. To assess classifier performance across a range of training dataset sizes, we further downsampled the training sets to create versions with 100, 316, 1000, and 3162 examples of each class ($10^2, 10^{2.5}, \\ldots $). Finally, we truncated each document to a maximum of 256 basic tokens to minimize any advantage the language-specific model might have due to its more compact representation of Finnish."
],
[
"We implement the text classification methods following devlin2018bert, minimizing task-specific architecture and simply attaching a dense output layer to the initial ([CLS]) token of the top layer of BERT. We establish baseline text classification performance using fastText BIBREF7. We evaluated a range of parameter combinations and different pretrained word vectors for the method using the development data, selecting character n-gram features of lengths 3–7, training for 25 epochs, and initialization with subword-enriched embeddings induced from Wikipedia texts BIBREF45 for the final experiments."
],
[
"The text classification results for various training set sizes are shown in Table TABREF45 for Yle news and in Table TABREF46 for Ylilauta online discussion and illustrated in Figure FIGREF47. We first note that performance is notably higher for the news corpus, with error rates for a given method and data set size more than doubling when moving from news to the discussion corpus. As both datasets represent 10-class classification tasks with balanced classes, this suggests that the latter task is inherently more difficult, perhaps in part due to the incidence of spam and off-topic messages on online discussion boards.",
"The cased and uncased variants of FinBERT perform very similarly for both datasets and all training set sizes, while for M-BERT the uncased model consistently outperforms the cased – as was also found for parsing – with a marked advantage for small dataset sizes.",
"Comparing M-BERT and FinBERT, we find that the language-specific models outperform the multilingual models across the full range of training data sizes for both datasets. For news, the four BERT variants have broadly similar learning curves, with the absolute advantage for FinBERT models ranging from 3% points for 1K examples to just over 1% point for 100K examples, and relative reductions in error from 20% to 13%. For online discussion, the differences are much more pronounced, with M-BERT models performing closer to the FastText baseline than to FinBERT. Here the language-specific BERT outperforms the multilingual by over 20% points for the smallest training data and maintains a 5% point absolute advantage even with 100,000 training examples, halving the error rate of the multilingual model for the smallest training set and maintaining an over 20% relative reduction for the largest.",
"These contrasting results for the news and discussion corpora may be explained in part by domain mismatch: while the news texts are written in formal Finnish resembling the Wikipedia texts included as pretraining data for all BERT models as well as the FastText word vectors, only FinBERT pretraining material included informal Finnish from online discussions. This suggests that in pretraining BERT models care should be taken to assure that not only the targeted language but also the targeted text domains are sufficiently represented in the data."
],
[
"Finally, we explored the ability of the models to capture linguistic properties using the probing tasks proposed by BIBREF46. We use the implementation and Finnish data introduced for these tasks by BIBREF47, which omit the TopConst task defined in the original paper. We also left out the Semantic odd-man-out (SOMO) task, as we found the data to have errors making the task impossible to perform correctly. All of the tasks involve freezing the BERT layers and training a dense layer on top of it to function as a diagnostic classifier. The only information passed from BERT to the classifier is the state represented by the [CLS] token.",
"In brief, the tasks can be roughly categorized into 3 different groups: surface, syntactic and semantic information."
],
[
"In the sentence length (SentLen) task, sentences are classified into 6 classes depending on their length. The word content (WC) task measures the model's ability to determine which of 1000 mid-frequency words occurs in a sentence, where only one of the words is present in any one sentence."
],
[
"The tree depth (TreeDepth) task is used to test how well the model can identify the depth of the syntax tree of a sentence. We used dependency trees to maintain comparability with the work of BIBREF47, whereas the original task used constituency trees. Bigram shift (BiShift) tests the model's ability to recognize when two adjacent words have had their positions swapped."
],
[
"In the subject number (SubjNum) task the number of the subject, i.e. singular or plural, connected to the main verb of a sentence is predicted. Object number (ObjNum) is similar to the previous task but for objects of the main verb. The Coordination inversion (CoordInv) has the order of two clauses joined by a coordinating conjunction reversed in half the examples. The model then has to predict whether or not a given example was inverted. In the Tense task the classifier has to predict whether a main verb of a sentence is in the present or past tense."
],
[
"Table TABREF57 presents results comparing the FinBERT models to replicated M-BERT results from BIBREF47. We find that the best performance is achieved by either the cased or uncased language-specific model for all tasks except TreeDepth, where M-BERT reaches the highest performance. The differences between the results for the language-specific and multilingual models are modest for most tasks with the exception of the BiShift task, where the FinBERT models are shown to be markedly better at identifying sentences with inverted words. While this result supports the conclusion of our other experiments that FinBERT is the superior language model, results for the other tasks offer only weak support at best. We leave for future work the question whether these tasks measure aspects where the language-specific model does not have a clear advantage over the multilingual or if the results reflect limitations in the implementation or data of the probing tasks."
],
[
"We have demonstrated that it is possible to create a language-specific BERT model for a lower-resourced language, Finnish, that clearly outperforms the multilingual BERT at a range of tasks and advances the state of the art in many NLP tasks. These findings raise the question whether it would be possible to realize similar advantages for other languages that currently lack dedicated models of this type. It is likely that the feasibility of training high quality deep transfer learning models hinges on the availability of pretraining data.",
"As of this writing, Finnish ranks 24th among the different language editions of Wikipedia by article count, and 25th in Common Crawl by page count. There are thus dozens of languages for which unannotated corpora of broadly comparable size or larger than that used to pretrain FinBERT could be readily assembled from online resources. Given that language-specific BERT models have been shown to outperform multilingual ones also for high-resource languages such as French BIBREF4 – ranked 3rd by Wikipedia article count – it is further likely that the benefits of a language-specific model observed here extend at least to languages with more resources than Finnish. (We are not aware of efforts to establish the minimum amount of unannotated text required to train high-quality models of this type.)",
"The methods we applied to collect and filter texts for training FinBERT have only few language dependencies, such as the use of UD parsing results for filtering. As UD resources are already available for over 70 languages, the specific approach and tools introduced in this work could be readily applied to a large number of languages. To facilitate such efforts, we also make all of the supporting tools developed in this work available under open licenses."
],
[
"In this work, we compiled and carefully filtered a large unannotated corpus of Finnish, trained language-specific FinBERT models, and presented evaluations comparing these to multilingual BERT models at a broad range of natural language processing tasks. The results indicate that the multilingual models fail to deliver on the promises of deep transfer learning for lower-resourced languages, falling behind the performance of previously proposed methods for most tasks. By contrast, the newly introduced FinBERT model was shown not only to outperform multilingual BERT for all downstream tasks, but also to establish new state-of-the art results for three different Finnish corpora for part-of-speech tagging and dependency parsing as well as for named entity recognition.",
"The FinBERT models and all of the tools and resources introduced in this paper are available under open licenses from https://turkunlp.org/finbert."
],
[
"We gratefully acknowledge the support of CSC – IT Center for Science through its Grand Challenge program, the Academy of Finland, the Google Digital News Innovation Fund and collaboration of the Finnish News Agency STT, as well as the NVIDIA Corporation GPU Grant Program."
]
],
"section_name": [
"Introduction",
"Related Work",
"Pretraining",
"Pretraining ::: Pretraining Data",
"Pretraining ::: Pretraining Data ::: News",
"Pretraining ::: Pretraining Data ::: Online discussion",
"Pretraining ::: Pretraining Data ::: Internet crawl",
"Pretraining ::: Pretraining Data ::: Cleanup and filtering",
"Pretraining ::: Vocabulary generation",
"Pretraining ::: Pretraining example generation",
"Pretraining ::: Pretraining process",
"Evaluation",
"Evaluation ::: Part of Speech Tagging",
"Evaluation ::: Part of Speech Tagging ::: Data",
"Evaluation ::: Part of Speech Tagging ::: Methods",
"Evaluation ::: Part of Speech Tagging ::: Results",
"Evaluation ::: Named Entity Recognition",
"Evaluation ::: Named Entity Recognition ::: Data",
"Evaluation ::: Named Entity Recognition ::: Methods",
"Evaluation ::: Named Entity Recognition ::: Results",
"Evaluation ::: Dependency Parsing",
"Evaluation ::: Dependency Parsing ::: Data",
"Evaluation ::: Dependency Parsing ::: Methods",
"Evaluation ::: Dependency Parsing ::: Results",
"Evaluation ::: Text classification",
"Evaluation ::: Text classification ::: Data",
"Evaluation ::: Text classification ::: Methods",
"Evaluation ::: Text classification ::: Results",
"Evaluation ::: Probing Tasks",
"Evaluation ::: Probing Tasks ::: Surface tasks",
"Evaluation ::: Probing Tasks ::: Syntactic tasks",
"Evaluation ::: Probing Tasks ::: Semantic tasks",
"Evaluation ::: Probing Tasks ::: Results",
"Discussion",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"1e68a8c136f88b21b23021d1a240203138830ba1",
"726c3501e9d30520b0825620c6d58f9be79dc3cc"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 6: Results for POS tagging (standard deviation in parentheses)",
"FLOAT SELECTED: Table 8: NER results for in-domain test set (standard deviation in parentheses)",
"FLOAT SELECTED: Table 9: NER results for out of domain test set (standard deviation in parentheses)",
"FLOAT SELECTED: Table 10: Labeled attachment score (LAS) parsing results for for predicted (p.seg) and gold (g.seg) segmentation. *Best performing combination in the TDT treebank (ELMo + transition-based parser)."
],
"extractive_spans": [],
"free_form_answer": "For POS, improvements for cased BERT are 1.26 2.52 0.5 for TDT, FTB and PUD datasets respectively.\nFor NER in-domain test set, improvement is 2.11 F1 and for NER out-of-domain test set, improvement is 5.32 F1.\nFor Dependency parsing, improvements are in range from 3.35 to 6.64 LAS for cased BERT.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: Results for POS tagging (standard deviation in parentheses)",
"FLOAT SELECTED: Table 8: NER results for in-domain test set (standard deviation in parentheses)",
"FLOAT SELECTED: Table 9: NER results for out of domain test set (standard deviation in parentheses)",
"FLOAT SELECTED: Table 10: Labeled attachment score (LAS) parsing results for for predicted (p.seg) and gold (g.seg) segmentation. *Best performing combination in the TDT treebank (ELMo + transition-based parser)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF25 summarizes the results for POS tagging. We find that neither M-BERT model improves on the previous state of the art for any of the three resources, with results ranging 0.1-0.8% points below the best previously published results. By contrast, both language-specific models outperform the previous state of the art, with absolute improvements for FinBERT cased ranging between 0.4 and 1.7% points. While these improvements over the already very high reference results are modest in absolute terms, the relative reductions in errors are notable: in particular, the FinBERT cased error rate on FTB is less than half of the best CoNLL'18 result BIBREF22. We also note that the uncased models are surprisingly competitive with their cased equivalents for a task where capitalization has long been an important feature: for example, FinBERT uncased performance is within approx. 0.1% points of FinBERT cased for all corpora.",
"Table TABREF41 shows LAS results for predicted and gold segmentation. While Udify initialized with M-BERT fails to outperform our strongest baseline BIBREF22, Udify initialized with FinBERT achieves notably higher performance on all three treebanks, establishing new state-of-the-art parsing results for Finnish with a large margin. Depending on the treebank, Udify with cased FinBERT LAS results are 2.3–3.6% points above the previous state of the art, decreasing errors by 24%–31% relatively.",
"Comparing M-BERT and FinBERT, we find that the language-specific models outperform the multilingual models across the full range of training data sizes for both datasets. For news, the four BERT variants have broadly similar learning curves, with the absolute advantage for FinBERT models ranging from 3% points for 1K examples to just over 1% point for 100K examples, and relative reductions in error from 20% to 13%. For online discussion, the differences are much more pronounced, with M-BERT models performing closer to the FastText baseline than to FinBERT. Here the language-specific BERT outperforms the multilingual by over 20% points for the smallest training data and maintains a 5% point absolute advantage even with 100,000 training examples, halving the error rate of the multilingual model for the smallest training set and maintaining an over 20% relative reduction for the largest."
],
"extractive_spans": [
"absolute improvements for FinBERT cased ranging between 0.4 and 1.7% points",
"LAS results are 2.3–3.6% points above the previous state of the art",
"absolute advantage for FinBERT models ranging from 3% points for 1K examples to just over 1% point for 100K examples"
],
"free_form_answer": "",
"highlighted_evidence": [
"By contrast, both language-specific models outperform the previous state of the art, with absolute improvements for FinBERT cased ranging between 0.4 and 1.7% points.",
"While Udify initialized with M-BERT fails to outperform our strongest baseline BIBREF22, Udify initialized with FinBERT achieves notably higher performance on all three treebanks, establishing new state-of-the-art parsing results for Finnish with a large margin. Depending on the treebank, Udify with cased FinBERT LAS results are 2.3–3.6% points above the previous state of the art, decreasing errors by 24%–31% relatively.",
"For news, the four BERT variants have broadly similar learning curves, with the absolute advantage for FinBERT models ranging from 3% points for 1K examples to just over 1% point for 100K examples, and relative reductions in error from 20% to 13%."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"88ef11309feeec4ce519345c657fead31306acbe",
"8cc54d2b404fec965b9f18c21ed1773c56cad890"
],
"answer": [
{
"evidence": [
"The current transfer learning methods have evolved from word embedding techniques, such as word2vec BIBREF5, GLoVe BIBREF6 and fastText BIBREF7, to take into account the textual context of words. Crucially, incorporating the context avoids the obvious limitations stemming from the one-vector-per-unique-word assumption inherent to the previous word embedding methods. The current successful wave of work proposing and applying different contextualized word embeddings was launched with ELMo BIBREF0, a context embedding method based on bidirectional LSTM networks. Another notable example is the ULMFit model BIBREF8, which specifically focuses on techniques for domain adaptation of LSTM-based language models. Following the introduction of the attention-based (as opposed to recurrent) Transformer architecture BIBREF9, BERT was proposed by BIBREF2, demonstrating superior performance on a broad array of tasks. The BERT model has been further refined in a number of follow-up studies BIBREF10, BIBREF11 and, presently, BERT and related models form the de facto standard approach to embedding text segments as well as individual words in context."
],
"extractive_spans": [
"ELMo ",
"ULMFit ",
"BERT"
],
"free_form_answer": "",
"highlighted_evidence": [
".",
"The current successful wave of work proposing and applying different contextualized word embeddings was launched with ELMo BIBREF0, a context embedding method based on bidirectional LSTM networks. Another notable example is the ULMFit model BIBREF8, which specifically focuses on techniques for domain adaptation of LSTM-based language models. Following the introduction of the attention-based (as opposed to recurrent) Transformer architecture BIBREF9, BERT was proposed by BIBREF2, demonstrating superior performance on a broad array of tasks. The BERT model has been further refined in a number of follow-up studies BIBREF10, BIBREF11 and, presently, BERT and related models form the de facto standard approach to embedding text segments as well as individual words in context."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We implement the BERT POS tagger straightforwardly by attaching a time-distributed dense output layer over the top layer of BERT and using the first piece of each wordpiece-tokenized input word to represent the word. The implementation and data processing tools are openly available. We compare POS tagging results to the best-performing methods for each corpus in the CoNLL 2018 shared task, namely that of che2018towards for TDT and FTB and lim2018sex for PUD. We report performance for the UPOS metric as implemented by the official CoNLL 2018 evaluation script.",
"We compare NER results to the rule-based FiNER-tagger BIBREF32 developed together with the FiNER corpus and to the neural network-based model of gungor2018improving targeted specifically toward morphologically rich languages. The former achieved the highest results on the corpus and the latter was the best-performing machine learning-based method in the experiments of ruokolainen2019finnish. Named entity recognition performance is evaluated in terms of exact mention-level precision, recall and F-score as implemented by the standard conlleval script, and F-score is used to compare performance.",
"We compare our results to the best-performing system in the CoNLL 2018 shared task for the LAS metric, HIT-SCIR BIBREF22. In addition to having the highest average score over all treebanks for this metric, the system also achieved the highest LAS among 26 participants for each of the three Finnish treebanks. The dependency parser used in the HIT-SCIR system is the biaffine graph-based parser of BIBREF35 with deep contextualized word embeddings (ELMo) BIBREF36 trained monolingually on web crawl and Wikipedia data provided by BIBREF37. The final HIT-SCIR model is an ensemble over three parser models trained with different parameter initializations, where the final prediction is calculated by averaging the softmaxed output scores.",
"We also compare results to the recent work of BIBREF33, where the merits of two parsing architectures, graph-based BIBREF38 and transition-based BIBREF39, are studied with two different deep contextualized embeddings, ELMo and BERT. We include results for their best-performing combination on the Finnish TDT corpus, the transition-based parser with monolingual ELMo embeddings."
],
"extractive_spans": [
"che2018towards",
"lim2018sex",
"FiNER-tagger BIBREF32",
"gungor2018",
"HIT-SCIR BIBREF22",
"BIBREF33"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare POS tagging results to the best-performing methods for each corpus in the CoNLL 2018 shared task, namely that of che2018towards for TDT and FTB and lim2018sex for PUD.",
"We compare NER results to the rule-based FiNER-tagger BIBREF32 developed together with the FiNER corpus and to the neural network-based model of gungor2018improving targeted specifically toward morphologically rich languages.",
"We compare our results to the best-performing system in the CoNLL 2018 shared task for the LAS metric, HIT-SCIR BIBREF22.",
"We also compare results to the recent work of BIBREF33, where the merits of two parsing architectures, graph-based BIBREF38 and transition-based BIBREF39, are studied with two different deep contextualized embeddings, ELMo and BERT."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0c8e83be0f13748f3026db698264dd15fbc77252",
"5dbd7c32e3821ad7ec7730083ee8381017861f0d"
],
"answer": [
{
"evidence": [
"We combine two major sources of Finnish news: the Yle corpus, an archive of news published by Finland's national public broadcasting company in the years 2011-2018, and The STT corpus of newswire articles sent to media outlets by the Finnish News Agency (STT) between 1992 and 2018. The combined resources contain approx. 900 million tokens, with 20% originating from the Yle corpus and 80% from STT.",
"The Suomi24 corpus (version 2017H2) contains all posts to the Suomi24 online discussion website from 2001 to 2017. Suomi24 is one of the largest social networking forums in Finland and covers a broad range of topics and levels of style and formality in language. The corpus is also roughly five times the size of the available news resources.",
"Two primary sources were used to create pretraining data from unrestricted crawls. First, we compiled documents from the dedicated internet crawl of the Finnish internet of luotolahti2015towards run between 2014 and 2016 using the SpiderLing crawler BIBREF16. Second, we selected texts from the Common Crawl project by running a a map-reduce language detection job on the plain text material from Common Crawl. These sources were supplemented with plain text extracted from the Finnish Wikipedia using the mwlib library. Following initial compilation, this text collection was analyzed for using the Onion deduplication tool. Duplicate documents were removed, and remaining documents grouped by their level of duplication."
],
"extractive_spans": [
"Yle corpus",
"STT corpus",
"Suomi24 corpus (version 2017H2)",
"luotolahti2015towards",
"Common Crawl",
"Finnish Wikipedia"
],
"free_form_answer": "",
"highlighted_evidence": [
"We combine two major sources of Finnish news: the Yle corpus, an archive of news published by Finland's national public broadcasting company in the years 2011-2018, and The STT corpus of newswire articles sent to media outlets by the Finnish News Agency (STT) between 1992 and 2018.",
"The Suomi24 corpus (version 2017H2) contains all posts to the Suomi24 online discussion website from 2001 to 2017.",
"First, we compiled documents from the dedicated internet crawl of the Finnish internet of luotolahti2015towards run between 2014 and 2016 using the SpiderLing crawler BIBREF16. Second, we selected texts from the Common Crawl project by running a a map-reduce language detection job on the plain text material from Common Crawl. These sources were supplemented with plain text extracted from the Finnish Wikipedia using the mwlib library."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To provide a sufficiently large and varied unannotated corpus for pretraining, we compiled Finnish texts from three primary sources: news, online discussion, and an internet crawl. All of the unannotated texts were split into sentences, tokenized, and parsed using the Turku Neural Parser pipeline BIBREF15. Table TABREF4 summarizes the initial statistics of the three sources prior to cleanup and filtering."
],
"extractive_spans": [
"news, online discussion, and an internet crawl"
],
"free_form_answer": "",
"highlighted_evidence": [
"To provide a sufficiently large and varied unannotated corpus for pretraining, we compiled Finnish texts from three primary sources: news, online discussion, and an internet crawl. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"By how much did the new model outperform multilingual BERT?",
"What previous proposed methods did they explore?",
"What was the new Finnish model trained on?"
],
"question_id": [
"654306d26ca1d9e77f4cdbeb92b3802aa9961da1",
"5a7d1ae6796e09299522ebda7bfcfad312d6d128",
"bd191d95806cee4cf80295e9ce1cd227aba100ab"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1: Pretraining text source statistics. Tokens are counted using BERT basic tokenization.",
"Table 2: Pretraining text statistics after cleanup and filtering",
"Table 3: Vocabulary statistics for tokenizing Wikipedia texts",
"Table 5: Statistics for the Turku Dependency Treebank, FinnTreeBank and Parallel UD treebank corpora",
"Table 6: Results for POS tagging (standard deviation in parentheses)",
"Table 7: FiNER named entity recognition corpus statistics",
"Table 8: NER results for in-domain test set (standard deviation in parentheses)",
"Table 9: NER results for out of domain test set (standard deviation in parentheses)",
"Table 10: Labeled attachment score (LAS) parsing results for for predicted (p.seg) and gold (g.seg) segmentation. *Best performing combination in the TDT treebank (ELMo + transition-based parser).",
"Table 11: Yle news 10-class text classification accuracy for varying training set sizes (percentages, standard deviation in parentheses)",
"Table 12: Ylilauta online discussion 10-class text classification accuracy for varying training set sizes (percentages, standard deviation in parentheses)",
"Figure 1: Text classification accuracy with different training data sizes for Yle news (left) and Ylilauta online discussion (right). (Note log x scales and different y ranges.)",
"Table 13: Probing results (standard deviation in parentheses)."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"4-Table5-1.png",
"5-Table6-1.png",
"5-Table7-1.png",
"6-Table8-1.png",
"6-Table9-1.png",
"7-Table10-1.png",
"9-Table11-1.png",
"9-Table12-1.png",
"9-Figure1-1.png",
"10-Table13-1.png"
]
} | [
"By how much did the new model outperform multilingual BERT?"
] | [
[
"1912.07076-6-Table9-1.png",
"1912.07076-Evaluation ::: Part of Speech Tagging ::: Results-0",
"1912.07076-7-Table10-1.png",
"1912.07076-5-Table6-1.png",
"1912.07076-6-Table8-1.png",
"1912.07076-Evaluation ::: Text classification ::: Results-2",
"1912.07076-Evaluation ::: Dependency Parsing ::: Results-0"
]
] | [
"For POS, improvements for cased BERT are 1.26 2.52 0.5 for TDT, FTB and PUD datasets respectively.\nFor NER in-domain test set, improvement is 2.11 F1 and for NER out-of-domain test set, improvement is 5.32 F1.\nFor Dependency parsing, improvements are in range from 3.35 to 6.64 LAS for cased BERT."
] | 297 |
1611.02378 | A Surrogate-based Generic Classifier for Chinese TV Series Reviews | With the emergence of various online video platforms like Youtube, Youku and LeTV, online TV series reviews have become increasingly important for both viewers and producers. Customers rely heavily on these reviews before selecting TV series, while producers use them to improve quality. As a result, automatically classifying reviews according to different requirements has evolved into a popular research topic and is essential in our daily life. In this paper, we focused on reviews of hot TV series in China and successfully trained generic classifiers based on eight predefined categories. The experimental results showed promising performance and the effectiveness of their generalization to different TV series. | {
"paragraphs": [
[
"With Web 2.0's development, more and more commercial websites, such as Amazon, Youtube and Youku, encourage users to post product reviews on their platforms BIBREF0 , BIBREF1 . These reviews are helpful for both readers and product manufacturers. For example, for TV or movie producers, online reviews indicates the aspects that viewers like and/or dislike. This information facilitates producers' production process. When producing future films TV series, they can tailer their shows to better accommodate consumers' tastes. For manufacturers, reviews may reveal customers' preference and feedback on product functions, which help manufacturers to improve their products in future development. On the other hand, consumers can evaluate the quality of product or TV series based on online reviews, which help them make final decisions of whether to buy or watch it. However, there are thousands of reviews emerging every day. Given the limited time and attention consumers have, it is impossible for them to allocate equal amount of attention to all the reviews. Moreover, some readers may be only interested in certain aspects of a product or TV series. It's been a waste of time to look at other irrelevant ones. As a result, automatic classification of reviews is essential for the review platforms to provide a better perception of the review contents to the users.",
"Most of the existing review studies focus on product reviews in English. While in this paper, we focus on reviews of hot Chinese movies or TV series, which owns some unique characteristics. First, Table TABREF1 shows Chinese movies' development BIBREF2 in recent years. The growth of box office and viewers is dramatically high in these years, which provides substantial reviewer basis for the movie/TV series review data. Moreover, the State Administration of Radio Film and Television also announced that the size of the movie market in China is at the 2nd place right after the North America market. In BIBREF2 , it also has been predicted that the movie market in China may eventually become the largest movie market in the world within the next 5-10 years. Therefore, it is of great interest to researchers, practitioners and investors to understand the movie market in China.",
"Besides flourishing of movie/TV series, there are differences of aspect focuses between product and TV series reviews. When a reviewer writes a movie/TV series review, he or she not only care about the TV elements like actor/actress, visual effect, dialogues and music, but also related teams consisted of director, screenwriter, producer, etc. However, with product reviews, few reviewers care about the corresponding backstage teams. What they do care and will comment about are only product related issues like drawbacks of the product functions, or which aspect of the merchandise they like or dislike. Moreover, most of recent researchers' work has been focused on English texts due to its simpler grammatical structure and less vocabulary, as compared with Chinese. Therefore, Chinese movie reviews not only provide more content based information, but also raise more technical challenges. With bloom of Chinese movies, automatic classification of Chinese movie reviews is really essential and meaningful.",
"In this paper, we proposed several strategies to make our classifiers generalizable to agnostic TV series. First, TV series roles' and actors/actresses' names are substituted by generic tags like role_i and player_j, where i and j defines their importance in this movie. On top of such kind of words, feature tokens are further manipulated by feature selection techniques like DRC or INLINEFORM0 , in order to make it more generic. We also experimented with different feature sizes with multiple classifiers in order to alleviate overfitting with high dimension features.",
"The remainder of this paper is organized as follows. Section 2 describes some related work. Section 3 states our problem and details our proposed procedure of approaching the problem. In Section 4, experimental results are provided and discussed. Finally, the conclusions are presented in Section 5."
],
[
"Since we are doing supervised learning task with text input, it is related with work of useful techniques like feature selections and supervised classifiers. Besides, there are only public movie review datasets in English right now, which is different from our language requirement. In the following of this section, we will first introduce some existing feature selection techniques and supervised classifiers we applied in our approach. Then we will present some relevant datasets that are normally used in movie review domain."
],
[
"Feature selection, or variable selection is a very common strategy applied in machine learning domain, which tries to select a subset of relevant features from the whole set. There are mainly three purposes behind this. Smaller feature set or features with lower dimension can help researchers to understand or interpret the model they designed more easily. With fewer features, we can also improve the generalization of our model through preventing overfitting, and reduce the whole training time.",
"Document Relevance Correlation(DRC), proposed by W. Fan et al 2005 BIBREF3 , is a useful feature selection technique. The authors apply this approach to profile generation in digital library service and news-monitoring. They compared DRC with other well-known methods like Robertson's Selection Value BIBREF4 , and machine learning based ones like information gain BIBREF5 . Promising experimental results were shown to demonstrate the effectiveness of DRC as a feature selection in text field.",
"Another popular feature selection method is called INLINEFORM0 BIBREF6 , which is a variant of INLINEFORM1 test in statistics that tries to test the independence between two events. While in feature selection domain, the two events can be interpreted as the occurrence of feature variable and a particular class. Then we can rank the feature terms with respect to the INLINEFORM2 value. It has been proved to be very useful in text domain, especially with bag of words feature model which only cares about the appearance of each term."
],
[
"What we need is to classify each review into several generic categories that might be attractive to the readers, so classifier selection is also quite important in our problem. Supervised learning takes labeled training pairs and tries to learn an inferred function, which can be used to predict new samples. In this paper, our selection is based on two kinds of learning, i.e., discriminative and generative learning algorithms. And we choose three typical algorithms to compare. Bayes BIBREF7 , which is the representative of generative learning, will output the class with the highest probability that is generated through the bayes' rule. While for the discriminative classifiers like logistic regression BIBREF8 or Support Vector Machine BIBREF9 , final decisions are based on the classifier's output score, which is compared with some threshold to distinguish between different classes."
],
[
"Dataset is another important factor influencing the performance of our classifiers. Most of the public available movie review data is in English, like the IMDB dataset collected by Pang/Lee 2004 BIBREF10 . Although it covers all kinds of movies in IMDB website, it only has labels related with the sentiment. Its initial goal was for sentiment analysis. Another intact movie review dataset is SNAP BIBREF11 , which consists of reviews from Amazon but only bearing rating scores. However, what we need is the content or aspect tags that are being discussed in each review. In addition, our review text is in Chinese. Therefore, it is necessary for us to build the review dataset by ourselves and label them into generic categories, which is one of as one of the contributions of this paper."
],
[
"Let INLINEFORM0 be a set of Chinese movie reviews with no categorical information. The ultimate task of movie review classification is to label them into different predefined categories as INLINEFORM1 . Starting from scratch, we need to collect such review set INLINEFORM2 from an online review website and then manually label them into generic categories INLINEFORM3 . Based on the collected dataset, we can apply natural language processing techniques to get raw text features and further learn the classifiers. In the following subsections, we will go through and elaborate all the subtasks shown in Figure FIGREF5 ."
],
[
"What we are interested in are the reviews of the hottest or currently broadcasted TV series, so we select one of the most influential movie and TV series sharing websites in China, Douban. For every movie or TV series, you can find a corresponding section in it. For the sake of popularity, we choose “The Journey of Flower”, “Nirvana in Fire” and “Good Time” as parts of our movie review dataset, which are the hottest TV series from summer to fall 2015. Reviews of each episode have been collected for the sake of dataset comprehensiveness.",
"Then we built the crawler written in python with the help of scrapy. Scrapy will create multiple threads to crawl information we need simultaneously, which saves us lots of time. For each episode, it collected both the short description of this episode and all the reviews under this post. The statistics of our TV series review dataset is shown in Table TABREF7 ."
],
[
"Based on the collected reviews, we are ready to build a rough classifier. Before feeding the reviews into a classifier, we applied two common procedures: tokenization and stop words removal for all the reviews. We also applied a common text processing technique to make our reviews more generic. We replaced the roles' and actors/actresses' names in the reviews with some common tokens like role_i, actor_j, where i and j are determined by their importance in this TV series. Therefore, we have the following inference DISPLAYFORM0 ",
" where INLINEFORM0 is a function which map a role's or actor's index into its importance. However, in practice, it is not a trivial task to infer the importance of all actors and actresses. We rely on data onBaidu Encyclopedia, which is the Chinese version of Wikipedia. For each movie or TV series, Baidu Encyclopedia has all the required information, which includes the level of importance for each role and actor in the show. Actor/actress in a leading role will be listed at first, followed by the ones in a supporting role and other players. Thus we can build a crawler to collect such information, and replace the corresponding words in reviews with generic tags.",
"Afterwards, word sequence of each review can be manipulated with tokenization and stop words removal. Each sequence is broken up into a vector of unigram-based tokens using NLPIR BIBREF12 , which is a very powerful tool supporting sentence segmentation in Chinese. Stop words are words that do not contribute to the meaning of the whole sentence and are usually filtered out before following data processing. Since our reviews are collected from online websites which may include lots of forum words, for this particular domain, we include common forum words in addition to the basic Chinese stop words. Shown below are some typical examples in English that are widely used in Chinese forums. INLINEFORM0 ",
" These two processes will help us remove significant amount of noise in the data."
],
[
"With volumes of TV series review data, it's hard for us to define generic categories without looking at them one by one. Therefore, it's necessary to run some unsupervised models to get an overview of what's being talked in the whole corpus. Here we applied Latent Dirichlet Allocation BIBREF13 , BIBREF14 to discover the main topics related to the movies and actors. In a nutshell, the LDA model assumes that there exists a hidden structure consisting of the topics appearing in the whole text corpus. The LDA algorithm uses the co-occurrence of observed words to learn this hidden structure. Mathematically, the model calculates the posterior distribution of the unobserved variables. Given a set of training documents, LDA will return two main outputs. The first is the list of topics represented as a set of words, which presumably contribute to this topic in the form of their weights. The second output is a list of documents with a vector of weight values showing the probability of a document containing a specific topic.",
"Based on the results from LDA, we carefully defined eight generic categories of movie reviews which are most representative in the dataset as shown in Table TABREF11 .",
"The purpose of this research is to classify each review into one of the above 8 categories. In order to build reasonable classifiers, first we need to obtain a labeled dataset. Each of the TV series reviews was labeled by at least two individuals, and only those reviews with the same assigned label were selected in our training and testing data. This approach ensures that reviews with human biases are filtered out. As a result, we have 5000 for each TV series that matches the selection criteria."
],
[
"After the labelled cleaned data has been generated, we are now ready to process the dataset. One problem is that the vocabulary size of our corpus will be quite large. This could result in overfitting with the training data. As the dimension of the feature goes up, the complexity of our model will also increase. Then there will be quite an amount of difference between what we expect to learn and what we will learn from a particular dataset. One common way of dealing with the issue is to do feature selection. Here we applied DRC and INLINEFORM0 mentioned in related work. First let's define a contingency table for each word INLINEFORM1 like in Table TABREF13 . If INLINEFORM2 , it means the appearance of word INLINEFORM3 .",
"Recall that in classical statistics, INLINEFORM0 is a method designed to measure the independence between two variables or events, which in our case is the word INLINEFORM1 and its relevance to the class INLINEFORM2 . Higher INLINEFORM3 value means higher correlations between them. Therefore, based on the definition of INLINEFORM4 in BIBREF6 and the above Table TABREF13 , we can represent the INLINEFORM5 value as below: DISPLAYFORM0 ",
" While for DRC method, it's based on Relevance Correlation Value, whose purpose is to measure the similarity between two distributions, i.e., binary distribution of word INLINEFORM0 's occurrence and documents' relevance to class INLINEFORM1 along all the training data. For a particular word INLINEFORM2 , its occurrence distribution along all the data can be represented as below (assume we have INLINEFORM3 reviews): DISPLAYFORM0 ",
" And we also know each review INLINEFORM0 's relevance with respect to INLINEFORM1 using the manually tagged labels. DISPLAYFORM0 ",
" where 0 means irrelevant and 1 means relevant. Therefore, we can calculate the similarity between these two vectors as DISPLAYFORM0 ",
" where INLINEFORM0 is called the Relevance Correlation Value for word INLINEFORM1 . Because INLINEFORM2 is either 1 or 0, with the notation in the contingency table, RCV can be simplified as DISPLAYFORM0 ",
" Then on top of RCV, they incorporate the probability of the presence of word INLINEFORM0 if we are given that the document is relevant. In this way, our final formula for computing DRC becomes DISPLAYFORM0 ",
"Therefore, we can apply the above two methods to all the word terms in our dataset and choose words with higher INLINEFORM0 or DRC values to reduce the dimension of our input features."
],
[
"Finally, we are going to train classifiers on top of our reduced generic features. As mentioned above, there are two kinds of learning algorithms, i.e., discriminant and generative classifiers. Based on Bayes rule, the optimal classifier is represented as INLINEFORM0 ",
" where INLINEFORM0 is the prior information we know about class INLINEFORM1 .",
"So for generative approach like Bayes, it will try to estimate both INLINEFORM0 and INLINEFORM1 . During testing time, we can just apply the above Bayes rule to predict INLINEFORM2 . Why do we call it naive? Remember that we assume that each feature is conditionally independent with each other. So we have DISPLAYFORM0 ",
" where we made the assumption that there are INLINEFORM0 words being used in our input. If features are binary, for each word INLINEFORM1 we may simply estimate the probability by DISPLAYFORM0 ",
" in which, INLINEFORM0 is a smoothing parameter in case there is no training sample for INLINEFORM1 and INLINEFORM2 outputs the number of a set. With all these probabilities computed, we can make decisions by whether DISPLAYFORM0 ",
"On the other hand, discriminant learning algorithms will estimate INLINEFORM0 directly, or learn some “discriminant” function INLINEFORM1 . Then by comparing INLINEFORM2 with some threshold, we can make the final decision. Here we applied two common classifiers logistic regression and support vector machine to classify movie reviews. Logistic regression squeezes the input feature into some interval between 0 and 1 by the sigmoid function, which can be treated as the probability INLINEFORM3 . DISPLAYFORM0 ",
" The Maximum A Posteriori of logistic regression with Gaussian priors on parameter INLINEFORM0 is defined as below INLINEFORM1 ",
" which is a concave function with respect to INLINEFORM0 , so we can use gradient ascent below to optimize the objective function and get the optimal INLINEFORM1 . DISPLAYFORM0 ",
" where INLINEFORM0 is a positive hyper parameter called learning rate. Then we can just use equation ( EQREF24 ) to distinguish between classes.",
"While for Support Vector Machine(SVM), its initial goal is to learn a hyperplane, which will maximize the margin between the two classes' boundary hyperplanes. Suppose the hyperplane we want to learn is INLINEFORM0 ",
" Then the soft-margin version of SVM is INLINEFORM0 ",
" where INLINEFORM0 is the slack variable representing the error w.r.t. datapoint INLINEFORM1 . If we represent the inequality constraints by hinge loss function DISPLAYFORM0 ",
" What we want to minimize becomes DISPLAYFORM0 ",
" which can be solved easily with a Quadratic Programming solver. With learned INLINEFORM0 and INLINEFORM1 , decision is made by determining whether DISPLAYFORM0 ",
"Based on these classifiers, we may also apply some kernel trick function on input feature to make originally linearly non-separable data to be separable on mapped space, which can further improve our classifier performance. What we've tried in our experiments are the polynomial and rbf kernels."
],
[
"As our final goal is to learn a generic classifier, which is agnostic to TV series but can predict review's category reasonably, we did experiments following our procedures of building the classifier as discussed in section 1."
],
[
"Before defining the categories of the movie reviews, we should first run some topic modeling method. Here we define categories with the help of LDA. With the number of topics being set as eight, we applied LDA on “The Journey of Flower”, which is the hottest TV series in 2015 summer. As we rely on LDA to guide our category definition, we didn't run it on other TV series. The results are shown in Figure FIGREF30 . Note that the input data here haven't been replaced with the generic tag like role_i or actor_j, as we want to know the specifics being talked by reviewers. Here we present it in the form of heat maps. For lines with brighter color, the corresponding topic is discussed more, compared with others on the same height for each review. As the original texts are in Chinese, the output of LDA are represented in Chinese as well.",
"We can see that most of the reviews are focused on discussing the roles and analyzing the plots in the movie, i.e., 6th and 7th topics in Figure FIGREF30 , while quite a few are just following the posts, like the 4th and 5th topic in the figure. Based on the findings, we generate the category definition shown in Table TABREF11 . Then 5000 out of each TV series reviews, with no label bias between readers, are selected to make up our final data set."
],
[
"Based on INLINEFORM0 and DRC discussed in section 3.4, we can sort the importance of each word term. With different feature size, we can train the eight generic classifiers and get their performances on both training and testing set. Here we use SVM as the classifier to compare feature size's influence. Our results suggest that it performs best among the three. The results are shown in Figure FIGREF32 . The red squares represent the training accuracy, while the blue triangles are testing accuracies.",
"As shown in Figure FIGREF32 , it is easy for us to determine the feature size for each classifier. Also it's obvious that test accuracies of classifiers for plot, actor/actress, analysis, and thumb up or down, didn't increase much with adding more words. Therefore, the top 1000 words with respect to these classes are fixed as the final feature words. While for the rest of classifiers, they achieved top testing performances at the size of about 4000. Based on these findings, we use different feature sizes in our final classifiers."
],
[
"To prove the generalization of our classifiers, we use two of the TV series as training data and the rest as testing set. We compare them with classifiers trained without the replacement of generic tags like role_i or actor_j. So 3 sets of experiments are performed, and each are trained on top of Bayes, Logistic Regression and SVM. Average accuracies among them are reported as the performance measure for the sake of space limit. The results are shown in Table TABREF42 . “1”, “2” and “3” represent the TV series “The Journey of Flower”, “Nirvana in Fire” and “Good Time” respectively. In each cell, the left value represents accuracy of classifier without replacement of generic tags and winners are bolded.",
"From the above table, we can see with substitutions of generic tags in movie reviews, the top 5 classifiers have seen performance increase, which indicates the effectiveness of our method. However for the rest three classifiers, we didn't see an improvement and in some cases the performance seems decreased. This might be due to the fact that in the first five categories, roles' or actors' names are mentioned pretty frequently while the rest classes don't care much about these. But some specific names might be helpful in these categories' classification, so the performance has decreased in some degree."
],
[
"In this paper, a surrogate-based approach is proposed to make TV series review classification more generic among reviews from different TV series. Based on the topic modeling results, we define eight generic categories and manually label the collected TV series' reviews. Then with the help of Baidu Encyclopedia, TV series' specific information like roles' and actors' names are substituted by common tags within TV series domain. Our experimental results showed that such strategy combined with feature selection did improve the performance of classifications. Through this way, one may build classifiers on already collected TV series reviews, and then successfully classify those from new TV series. Our approach has broad implications on processing movie reviews as well. Since movie reviews and TV series reviews share many common characteristics, this approach can be easily applied to understand movie reviews and help movie producers to better process and classify consumers' movie review with higher accuracy."
]
],
"section_name": [
"Introduction",
"Related Work",
"Feature selection",
"Supervised Classifier",
"TV series Review Dataset",
"Chinese TV series Review Classification",
"Building Dataset",
"Basic Text Processing",
"Topic Modelling and Labeling",
"Feature Selection",
"Learning Classifiers",
"Experimental Results and Discussion",
"Category Determining by LDA",
"Feature Size Comparison",
"Generalization of Classifiers",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"0ce12bb33cc1bb468cf9630eb9e66aae1ec974de",
"b206023fbf3558661419641948b7594079388d09"
],
"answer": [
{
"evidence": [
"What we are interested in are the reviews of the hottest or currently broadcasted TV series, so we select one of the most influential movie and TV series sharing websites in China, Douban. For every movie or TV series, you can find a corresponding section in it. For the sake of popularity, we choose “The Journey of Flower”, “Nirvana in Fire” and “Good Time” as parts of our movie review dataset, which are the hottest TV series from summer to fall 2015. Reviews of each episode have been collected for the sake of dataset comprehensiveness."
],
"extractive_spans": [],
"free_form_answer": "3",
"highlighted_evidence": [
"For the sake of popularity, we choose “The Journey of Flower”, “Nirvana in Fire” and “Good Time” as parts of our movie review dataset, which are the hottest TV series from summer to fall 2015."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"What we are interested in are the reviews of the hottest or currently broadcasted TV series, so we select one of the most influential movie and TV series sharing websites in China, Douban. For every movie or TV series, you can find a corresponding section in it. For the sake of popularity, we choose “The Journey of Flower”, “Nirvana in Fire” and “Good Time” as parts of our movie review dataset, which are the hottest TV series from summer to fall 2015. Reviews of each episode have been collected for the sake of dataset comprehensiveness."
],
"extractive_spans": [],
"free_form_answer": "Three tv series are considered.",
"highlighted_evidence": [
"For the sake of popularity, we choose “The Journey of Flower”, “Nirvana in Fire” and “Good Time” as parts of our movie review dataset, which are the hottest TV series from summer to fall 2015."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0ed8d59cb7a58d9dbe68a0cf20b6ca3c9eb336e2",
"de22862267938222e466fc885265abfd2cab4bf6"
],
"answer": [
{
"evidence": [
"Then we built the crawler written in python with the help of scrapy. Scrapy will create multiple threads to crawl information we need simultaneously, which saves us lots of time. For each episode, it collected both the short description of this episode and all the reviews under this post. The statistics of our TV series review dataset is shown in Table TABREF7 ."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Table 2) Dataset contains 19062 reviews from 3 tv series.",
"highlighted_evidence": [
"The statistics of our TV series review dataset is shown in Table TABREF7 .",
"The statistics of our TV series review dataset is shown in Table TABREF7 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"8cb23ec2f6ec04cfa0196b904142cda0100d8391",
"fffe117d3cada46cc7e013e5e4d30e1dab53df4f"
],
"answer": [
{
"evidence": [
"Let INLINEFORM0 be a set of Chinese movie reviews with no categorical information. The ultimate task of movie review classification is to label them into different predefined categories as INLINEFORM1 . Starting from scratch, we need to collect such review set INLINEFORM2 from an online review website and then manually label them into generic categories INLINEFORM3 . Based on the collected dataset, we can apply natural language processing techniques to get raw text features and further learn the classifiers. In the following subsections, we will go through and elaborate all the subtasks shown in Figure FIGREF5 ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Starting from scratch, we need to collect such review set INLINEFORM2 from an online review website and then manually label them into generic categories INLINEFORM3 ."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The purpose of this research is to classify each review into one of the above 8 categories. In order to build reasonable classifiers, first we need to obtain a labeled dataset. Each of the TV series reviews was labeled by at least two individuals, and only those reviews with the same assigned label were selected in our training and testing data. This approach ensures that reviews with human biases are filtered out. As a result, we have 5000 for each TV series that matches the selection criteria."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Each of the TV series reviews was labeled by at least two individuals, and only those reviews with the same assigned label were selected in our training and testing data."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"2f3b2a07ac4a4e00083fd910d831a44ee9401e9d",
"3bc124c4c54e9b42e7b91080b1990e8a667f7543"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Categories of Movie Reviews"
],
"extractive_spans": [],
"free_form_answer": "Plot of the TV series, Actor/actress, Role, Dialogue, Analysis, Platform, Thumb up or down, Noise or others",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Categories of Movie Reviews"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Based on the results from LDA, we carefully defined eight generic categories of movie reviews which are most representative in the dataset as shown in Table TABREF11 .",
"FLOAT SELECTED: Table 3: Categories of Movie Reviews"
],
"extractive_spans": [],
"free_form_answer": "Eight categories are: Plot of the TV series, Actor/actress actors, Role, Dialogue discussion, Analysis, Platform, Thumb up or down and Noise or others.",
"highlighted_evidence": [
"Based on the results from LDA, we carefully defined eight generic categories of movie reviews which are most representative in the dataset as shown in Table TABREF11 .",
"FLOAT SELECTED: Table 3: Categories of Movie Reviews"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How many TV series are considered?",
"How long is the dataset?",
"Is manual annotation performed?",
"What are the eight predefined categories?"
],
"question_id": [
"a9cae57f494deb0245b40217d699e9a22db0ea6e",
"0a736e0e3305a50d771dfc059c7d94b8bd27032e",
"283d358606341c399e369f2ba7952cd955326f73",
"818c85ee26f10622c42ae7bcd0dfbdf84df3a5e0"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Chinese Movies Box Office Statistics",
"Figure 1: Procedure of Building Generic Classifiers",
"Table 3: Categories of Movie Reviews",
"Table 4: Contingency Table for Word j",
"Figure 2: LDA results with 8 topics",
"Table 5: Performance of 8 Classifiers",
"Figure 3: Accuracy vs Feature size on 8 classifiers"
],
"file": [
"2-Table1-1.png",
"4-Figure1-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"9-Figure2-1.png",
"9-Table5-1.png",
"10-Figure3-1.png"
]
} | [
"How many TV series are considered?",
"How long is the dataset?",
"What are the eight predefined categories?"
] | [
[
"1611.02378-Building Dataset-0"
],
[
"1611.02378-Building Dataset-1"
],
[
"1611.02378-5-Table3-1.png",
"1611.02378-Topic Modelling and Labeling-1"
]
] | [
"Three tv series are considered.",
"Answer with content missing: (Table 2) Dataset contains 19062 reviews from 3 tv series.",
"Eight categories are: Plot of the TV series, Actor/actress actors, Role, Dialogue discussion, Analysis, Platform, Thumb up or down and Noise or others."
] | 298 |
1905.08392 | A Causality-Guided Prediction of the TED Talk Ratings from the Speech-Transcripts using Neural Networks | Automated prediction of public speaking performance enables novel systems for tutoring public speaking skills. We use the largest open repository---TED Talks---to predict the ratings provided by the online viewers. The dataset contains over 2200 talk transcripts and the associated meta information including over 5.5 million ratings from spontaneous visitors to the website. We carefully removed the bias present in the dataset (e.g., the speakers' reputations, popularity gained by publicity, etc.) by modeling the data generating process using a causal diagram. We use a word sequence based recurrent architecture and a dependency tree based recursive architecture as the neural networks for predicting the TED talk ratings. Our neural network models can predict the ratings with an average F-score of 0.77 which largely outperforms the competitive baseline method. | {
"paragraphs": [
[
"While the demand for physical and manual labor is gradually declining, there is a growing need for a workforce with soft skills. Which soft skill do you think would be the most valuable in your daily life? According to an article in Forbes BIBREF0 , 70% of employed Americans agree that public speaking skills are critical to their success at work. Yet, it is one of the most dreaded acts. Many people rate the fear of public speaking even higher than the fear of death BIBREF1 . To alleviate the situation, several automated systems are now available that can quantify behavioral data for participants to reflect on BIBREF2 . Predicting the viewers' ratings from the speech transcripts would enable these systems to generate feedback on the potential audience behavior.",
"Predicting human behavior, however, is challenging due to its huge variability and the way the variables interact with each other. Running Randomized Control Trials (RCT) to decouple each variable is not always feasible and also expensive. It is possible to collect a large amount of observational data due to the advent of content sharing platforms such as YouTube, Massive Open Online Courses (MOOC), or ted.com. However, the uncontrolled variables in the observational dataset always keep a possibility of incorporating the effects of the “data bias” into the prediction model. Recently, the problems of using biased datasets are becoming apparent. BIBREF3 showed that the error rates in the commercial face-detectors for the dark-skinned females are 43 times higher than the light-skinned males due to the bias in the training dataset. The unfortunate incident of Google's photo app tagging African-American people as “Gorilla” BIBREF4 also highlights the severity of this issue.",
"We address the data bias issue as much as possible by carefully analyzing the relationships of different variables in the data generating process. We use a Causal Diagram BIBREF5 , BIBREF6 to analyze and remove the effects of the data bias (e.g., the speakers' reputations, popularity gained by publicity, etc.) in our prediction model. In order to make the prediction model less biased to the speakers' race and gender, we confine our analysis to the transcripts only. Besides, we normalize the ratings to remove the effects of the unwanted variables such as the speakers' reputations, publicity, contemporary hot topics, etc.",
"For our analysis, we curate an observational dataset of public speech transcripts and other meta-data collected from the ted.com website. This website contains a large collection of high-quality public speeches that are freely available to watch, share, rate, and comment on. Every day, numerous people watch and annotate their perceptions about the talks. Our dataset contains 2231 public speech transcripts and over 5 million ratings from the spontaneous viewers of the talks. The viewers annotate each talk by 14 different labels—Beautiful, Confusing, Courageous, Fascinating, Funny, Informative, Ingenious, Inspiring, Jaw-Dropping, Long-winded, Obnoxious, OK, Persuasive, and Unconvincing.",
"We use two neural network architectures in the prediction task. In the first architecture, we use LSTM BIBREF7 for a sequential input of the words within the sentences of the transcripts. In the second architecture, we use TreeLSTM BIBREF8 to represent the input sentences in the form of a dependency tree. Our experiments show that the dependency tree-based model can predict the TED talk ratings with slightly higher performance (average F-score 0.77) than the word sequence model (average F-score 0.76). To the best of our knowledge, this is the best performance in the literature on predicting the TED talk ratings. We compare the performances of these two models with a baseline of classical machine learning techniques using hand-engineered features. We find that the neural networks largely outperform the classical methods. We believe this gain in performance is achieved by the networks' ability to capture better the natural relationship of the words (as compared to the hand engineered feature selection approach in the baseline methods) and the correlations among different rating labels."
],
[
"In this section, we describe a few relevant prior arts on behavioral prediction."
],
[
"An example of human behavioral prediction research is to automatically grade essays, which has a long history BIBREF9 . Recently, the use of deep neural network based solutions BIBREF10 , BIBREF11 are becoming popular in this field. BIBREF12 proposed an adversarial approach for their task. BIBREF13 proposed a two-stage deep neural network based solution. Predicting helpfulness BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 in the online reviews is another example of predicting human behavior. BIBREF18 proposed a combination of Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) based framework to predict humor in the dialogues. Their method achieved an 8% improvement over a Conditional Random Field baseline. BIBREF19 analyzed the performance of phonological pun detection using various natural language processing techniques. In general, behavioral prediction encompasses numerous areas such as predicting outcomes in job interviews BIBREF20 , hirability BIBREF21 , presentation performance BIBREF22 , BIBREF23 , BIBREF24 etc. However, the practice of explicitly modeling the data generating process is relatively uncommon. In this paper, we expand the prior work by explicitly modeling the data generating process in order to remove the data bias."
],
[
"There is a limited amount of work on predicting the TED talk ratings. In most cases, TED talk performances are analyzed through introspection BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 .",
" BIBREF30 analyzed the TED Talks for humor detection. BIBREF31 analyzed the transcripts of the TED talks to predict audience engagement in the form of applause. BIBREF32 predicted user interest (engaging vs. non-engaging) from high-level visual features (e.g., camera angles) and audience applause. BIBREF33 proposed a sentiment-aware nearest neighbor model for a multimedia recommendation over the TED talks. BIBREF34 predicted the TED talk ratings from the linguistic features of the transcripts. This work is most similar to ours. However, we are proposing a new prediction framework using the Neural Networks."
],
[
"The data for this study was gathered from the ted.com website on November 15, 2017. We removed the talks published six months before the crawling date to make sure each talk has enough ratings for a robust analysis. More specifically, we filtered any talk that—"
]
],
"section_name": [
"Introduction",
"Background Research",
"Predicting Human Behavior",
"Predicting the TED Talk Performance",
"Dataset"
]
} | {
"answers": [
{
"annotation_id": [
"1de38232aaf182c56d0a141aab9478e9535a5708",
"4e617b6d6425e05ebe47e7b8be63d7e620c86df9"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"5a8b21c3e8f01430cbd24fd7f7c480229439f61b",
"fbdd4d56fd5b9685702b5b1bc67ddd31f9558a69"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4: Recall for various rating categories. The reason we choose recall is for making comparison with the results reported by Weninger et al. (2013)."
],
"extractive_spans": [],
"free_form_answer": "Baseline performed better in \"Fascinating\" and \"Jaw-dropping\" categories.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Recall for various rating categories. The reason we choose recall is for making comparison with the results reported by Weninger et al. (2013)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 4: Recall for various rating categories. The reason we choose recall is for making comparison with the results reported by Weninger et al. (2013)."
],
"extractive_spans": [],
"free_form_answer": "Weninger et al. (SVM) model outperforms on the Fascinating category.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Recall for various rating categories. The reason we choose recall is for making comparison with the results reported by Weninger et al. (2013)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"5fae30b2979a7e77414448ba01f3a93ddec3e938",
"bde17330bf0dd18ea1435344e07ab3104e7e1964"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Average F-score, Precision, Recall and Accuracy for various models. Due to the choice of the median thresholds, the precision, recall, F-score, and accuracy values are practically identical in our experiments.",
"FLOAT SELECTED: Table 4: Recall for various rating categories. The reason we choose recall is for making comparison with the results reported by Weninger et al. (2013)."
],
"extractive_spans": [],
"free_form_answer": "LinearSVM, LASSO, Weninger at al. (SVM)",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Average F-score, Precision, Recall and Accuracy for various models. Due to the choice of the median thresholds, the precision, recall, F-score, and accuracy values are practically identical in our experiments.",
"FLOAT SELECTED: Table 4: Recall for various rating categories. The reason we choose recall is for making comparison with the results reported by Weninger et al. (2013)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Average F-score, Precision, Recall and Accuracy for various models. Due to the choice of the median thresholds, the precision, recall, F-score, and accuracy values are practically identical in our experiments."
],
"extractive_spans": [],
"free_form_answer": "LinearSVM, LASSO, Weninger et al.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Average F-score, Precision, Recall and Accuracy for various models. Due to the choice of the median thresholds, the precision, recall, F-score, and accuracy values are practically identical in our experiments."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"0d02e9d063b89d63ecab7b7fe9a81e9307814479",
"2b9e3be4995ac2c08ad0fcceee5d52e180fddf3f"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We use two neural network architectures in the prediction task. In the first architecture, we use LSTM BIBREF7 for a sequential input of the words within the sentences of the transcripts. In the second architecture, we use TreeLSTM BIBREF8 to represent the input sentences in the form of a dependency tree. Our experiments show that the dependency tree-based model can predict the TED talk ratings with slightly higher performance (average F-score 0.77) than the word sequence model (average F-score 0.76). To the best of our knowledge, this is the best performance in the literature on predicting the TED talk ratings. We compare the performances of these two models with a baseline of classical machine learning techniques using hand-engineered features. We find that the neural networks largely outperform the classical methods. We believe this gain in performance is achieved by the networks' ability to capture better the natural relationship of the words (as compared to the hand engineered feature selection approach in the baseline methods) and the correlations among different rating labels."
],
"extractive_spans": [],
"free_form_answer": "It performs better than other models predicting TED talk ratings.",
"highlighted_evidence": [
"Our experiments show that the dependency tree-based model can predict the TED talk ratings with slightly higher performance (average F-score 0.77) than the word sequence model (average F-score 0.76). To the best of our knowledge, this is the best performance in the literature on predicting the TED talk ratings."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0ea6460baac6b98d65924011708a178ca6523562",
"520f0f932480ced332deeeb9e6e6a6d4f5e35496"
],
"answer": [
{
"evidence": [
"We address the data bias issue as much as possible by carefully analyzing the relationships of different variables in the data generating process. We use a Causal Diagram BIBREF5 , BIBREF6 to analyze and remove the effects of the data bias (e.g., the speakers' reputations, popularity gained by publicity, etc.) in our prediction model. In order to make the prediction model less biased to the speakers' race and gender, we confine our analysis to the transcripts only. Besides, we normalize the ratings to remove the effects of the unwanted variables such as the speakers' reputations, publicity, contemporary hot topics, etc."
],
"extractive_spans": [],
"free_form_answer": "By confining to transcripts only and normalizing ratings to remove the effects of speaker's reputations, popularity gained by publicity, contemporary hot topics, etc.",
"highlighted_evidence": [
"We use a Causal Diagram BIBREF5 , BIBREF6 to analyze and remove the effects of the data bias (e.g., the speakers' reputations, popularity gained by publicity, etc.) in our prediction model. In order to make the prediction model less biased to the speakers' race and gender, we confine our analysis to the transcripts only. Besides, we normalize the ratings to remove the effects of the unwanted variables such as the speakers' reputations, publicity, contemporary hot topics, etc."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"76a44f276e73e8b09a7621473ee1c7a45105db77",
"ae2fc98ba415cc7e01518899a7555bb99ffccc28"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"3dab1b35f94b5cb09bedbe9e4baf1ca24b7d8e5a",
"7fb51ed438fcb3192469cef9e267a3829413720f"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"When the authors say their method largely outperforms the baseline, does this mean that the baseline performed better in some cases? If so, which ones?",
"What baseline method was used?",
"What was the motivation for using a dependency tree based recursive architecture?",
"How was a causal diagram used to carefully remove this bias?",
"How does publicity bias the dataset?",
"How do the speakers' reputations bias the dataset?"
],
"question_id": [
"37b0ee4a9d0df3ae3493e3b9114c3f385746da5c",
"bba70f3cf4ca1e0bb8c4821e3339c655cdf515d6",
"c5f9894397b1a0bf6479f5fd9ee7ef3e38cfd607",
"9f8c0e02a7a8e9ee69f4c1757817cde85c7944bd",
"6cbbedb34da50286f44a0f3f6312346e876e2be5",
"173060673cb15910cc310058bbb9750614abda52",
"98c8ed9019e43839ffb53a714bc37fbb1c28fe2c"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 2: Causal Diagram of the Data Generating Process of TED Talks",
"Figure 1: Counts of all the 14 different rating categories (labels) in the dataset",
"Table 1: Dataset Properties",
"Table 2: Correlation coefficients of each category of the ratings with the Total Views and the “Age” of Talks",
"Figure 3: An illustration of the Word Sequence Model",
"Figure 4: An illustration of the Dependency Tree-based Model",
"Figure 5: Effect of regularization on the training and development subset loss",
"Table 3: Average F-score, Precision, Recall and Accuracy for various models. Due to the choice of the median thresholds, the precision, recall, F-score, and accuracy values are practically identical in our experiments.",
"Table 4: Recall for various rating categories. The reason we choose recall is for making comparison with the results reported by Weninger et al. (2013)."
],
"file": [
"3-Figure2-1.png",
"3-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Figure3-1.png",
"5-Figure4-1.png",
"6-Figure5-1.png",
"7-Table3-1.png",
"7-Table4-1.png"
]
} | [
"When the authors say their method largely outperforms the baseline, does this mean that the baseline performed better in some cases? If so, which ones?",
"What baseline method was used?",
"What was the motivation for using a dependency tree based recursive architecture?",
"How was a causal diagram used to carefully remove this bias?"
] | [
[
"1905.08392-7-Table4-1.png"
],
[
"1905.08392-7-Table3-1.png",
"1905.08392-7-Table4-1.png"
],
[
"1905.08392-Introduction-4"
],
[
"1905.08392-Introduction-2"
]
] | [
"Weninger et al. (SVM) model outperforms on the Fascinating category.",
"LinearSVM, LASSO, Weninger et al.",
"It performs better than other models predicting TED talk ratings.",
"By confining to transcripts only and normalizing ratings to remove the effects of speaker's reputations, popularity gained by publicity, contemporary hot topics, etc."
] | 299 |
1911.11161 | Emotional Neural Language Generation Grounded in Situational Contexts | Emotional language generation is one of the keys to human-like artificial intelligence. Humans use different type of emotions depending on the situation of the conversation. Emotions also play an important role in mediating the engagement level with conversational partners. However, current conversational agents do not effectively account for emotional content in the language generation process. To address this problem, we develop a language modeling approach that generates affective content when the dialogue is situated in a given context. We use the recently released Empathetic-Dialogues corpus to build our models. Through detailed experiments, we find that our approach outperforms the state-of-the-art method on the perplexity metric by about 5 points and achieves a higher BLEU metric score. | {
"paragraphs": [
[
"Rapid advancement in the field of generative modeling through the use of neural networks has helped advance the creation of more intelligent conversational agents. Traditionally these conversational agents are built using seq2seq framework that is widely used in the field of machine translation BIBREF0. However, prior research has shown that engaging with these agents produces dull and generic responses whilst also being inconsistent with the emotional tone of conversation BIBREF0, BIBREF1. These issues also affect engagement with the conversational agent, that leads to short conversations BIBREF2. Apart from producing engaging responses, understanding the situation and producing the right emotional response to a that situation is another desirable trait BIBREF3.",
"Emotions are intrinsic to humans and help in creation of a more engaging conversation BIBREF4. Recent work has focused on approaches towards incorporating emotion in conversational agents BIBREF5, BIBREF6, BIBREF7, BIBREF8, however these approaches are focused towards seq2seq task. We approach this problem of emotional generation as a form of transfer learning, using large pretrained language models. These language models, including BERT, GPT-2 and XL-Net, have helped achieve state of the art across several natural language understanding tasks BIBREF9, BIBREF10, BIBREF11. However, their success in language modeling tasks have been inconsistent BIBREF12. In our approach, we use these pretrained language models as the base model and perform transfer learning to fine-tune and condition these models on a given emotion. This helps towards producing more emotionally relevant responses for a given situation. In contrast, the work done by Rashkin et al. BIBREF3 also uses large pretrained models but their approach is from the perspective of seq2seq task.",
"Our work advances the field of conversational agents by applying the transfer learning approach towards generating emotionally relevant responses that is grounded on emotion and situational context. We find that our fine-tuning based approach outperforms the current state of the art approach on the automated metrics of the BLEU and perplexity. We also show that transfer learning approach helps produce well crafted responses on smaller dialogue corpus."
],
[
"Consider the example show in Table TABREF1 that shows a snippet of the conversation between a speaker and a listener that is grounded in a situation representing a type of emotion. Our goal is to produce responses to conversation that are emotionally appropriate to the situation and emotion portrayed.",
"We approach this problem through a language modeling approach. We use large pre-trained language model as the base model for our response generation. This model is based on the transformer architecture and makes uses of the multi-headed self-attention mechanism to condition itself of the previously seen tokens to its left and produces a distribution over the target tokens. Our goal is to make the language model $p(y)=p(y_1,y_2,....,y_t;\\theta )$ learn on new data and estimate the conditional probability $p(y|x)$. Radford et al. BIBREF10 demonstrated the effectiveness of language models to learn from a zero-shot approach in a multi-task setting. We take inspiration from this approach to condition our model on the task-specific variable $p(y_t|x,y_{< t})$, where $x$ is the task-specific variable, in this case the emotion label. We prepend the conditional variable (emotion, situational context) to the dialogue similar to the approach from Wolf et al BIBREF13. We ensure that that the sequences are separated by special tokens."
],
[
"In our experiments we use the Empathetic Dialogues dataset made available by Rashkin et al. BIBREF3. Empathetic dialogues is crowdsourced dataset that contains dialogue grounded in a emotional situation. The dataset comprises of 32 emotion labels including surprised, excited, angry, proud, grateful. The speaker initiates the conversation using the grounded emotional situation and the listener responds in an appropriate manner.Table TABREF4 provides the basic statistics of the corpus."
],
[
"In all our experiments, we use the GPT-2 pretrained language model. We use the publicly available model containing 117M parameters with 12 layers; each layer has 12 heads. We implemented our models using PyTorch Transformers. The input sentences are tokenized using byte-pair encoding(BPE) BIBREF14 (vocabulary size of 50263). While decoding, we use the nucleus sampling ($p=0.9$) approach instead of beam-search to overcome the drawbacks of beam search BIBREF15, BIBREF16. All our models are trained on a single TitanV GPU and takes around 2 hours to fine-tune the model. The fine-tuned models along with the configuration files and the code will be made available at: https://github.com/sashank06/CCNLG-emotion."
],
[
"Evaluating the quality of responses in open domain situations where the goal is not defined is an important area of research. Researchers have used methods such as BLEU , METEOR BIBREF17, ROUGE BIBREF18 from machine translation and text summarization BIBREF19 tasks. BLEU and METEOR are based on word overlap between the proposed and ground truth responses; they do not adequately account for the diversity of responses that are possible for a given input utterance and show little to no correlation with human judgments BIBREF19. We report on the BLEU BIBREF20 and Perplexity (PPL) metric to provide a comparison with the current state-of-the-art methods. We also report our performance using other metrics such as length of responses produced by the model. Following, Mei et al BIBREF21, we also report the diversity metric that helps us measure the ability of the model to promote diversity in responses BIBREF22. Diversity is calculated as the as the number of distinct unigrams in the generation scaled by the total number of generated tokens BIBREF21, BIBREF1. We report on two additional automated metrics of readability and coherence. Readability quantifies the linguistic quality of text and the difficulty of the reader in understanding the text BIBREF23. We measure readability through the Flesch Reading Ease (FRE) BIBREF24 which computes the number of words, syllables and sentences in the text. Higher readability scores indicate that utterance is easier to read and comprehend. Similarly, coherence measures the ability of the dialogue system to produce responses consistent with the topic of conversation. To calculate coherence, we use the method proposed by Dziri et al. BIBREF25."
],
[
"We first compare the performance of our approach with the baseline results obtained from Rashkin et al. BIBREF3 that uses a full transformer architecture BIBREF26, consisting of an encoder and decoder. Table TABREF9 provides a comparison of our approach with to the baseline approach. In Table TABREF9, we refer our “Our Model Fine-Tuned” as the baseline fine-tuned GPT-2 model trained on the dialogue and “Our-model Emo-prepend” as the GPT-2 model that is fine-tuned on the dialogues but also conditioned on the emotion displayed in the conversation. We find that fine-tuning the GPT-2 language model using a transfer learning approach helps us achieve a lower perplexity and a higher BLEU scores. The results from our approach are consistent with the empirical study conducted by Edunov et al BIBREF27 that demonstrate the effectiveness of the using pre-trained model diminishes when added to the decoder network in an seq2seq approach. We also perform a comparison between our two models on the metrics of length, diversity, readability and coherence. We find that our baseline model produces less diverse responses compared to when the model is conditioned on emotion. We find that the our emo-prepend model also higher a slightly higher readability score that our baseline model."
],
[
"To assess the quality of generations, we conducted a MTurk human evaluation. We recruited a total of 15 participants and each participant was asked to evaluate 25 randomly sampled outputs from the test set on three metrics:",
"Readability - Is the response easy to understand, fluent and grammatical and does not have any consecutive repeating words.",
"Coherence - Is the response relevant to the context of the conversation.",
"Emotional Appropriateness- Does the response convey emotion suitable to the context of the conversation?",
"Table TABREF15 shows the results obtained from the human evaluation comparing the performance of our fine-tuned, emotion pre-pend model to the ground-truth response. We find that our fine-tuned model outperforms the emo-prepend on all three metrics from the ratings provided by the human ratings."
],
[
"The area of dialogue systems has been studied extensively in both open-domain BIBREF28 and goal-oriented BIBREF29 situations. Extant approaches towards building dialogue systems has been done predominantly through the seq2seq framework BIBREF0. However, prior research has shown that these systems are prone to producing dull and generic responses that causes engagement with the human to be affected BIBREF0, BIBREF2. Researchers have tackled this problem of dull and generic responses through different optimization function such as MMI BIBREF30 and through reinforcement learning approachesBIBREF31. Alternative approaches towards generating more engaging responses is by grounding them in personality of the speakers that enables in creating more personalized and consistent responses BIBREF1, BIBREF32, BIBREF13.",
"Several other works have focused on creating more engaging responses by producing affective responses. One of the earlier works to incorporate affect through language modeling is the work done by Ghosh et al. BIBREF8. This work leverages the LIWC BIBREF33 text analysis platform for affective features. Alternative approaches of inducing emotion in generated responses from a seq2seq framework include the work done by Zhou et alBIBREF6 that uses internal and external memory, Asghar et al. BIBREF5 that models emotion through affective embeddings and Huang et al BIBREF7 that induce emotion through concatenation with input sequence. More recently, introduction of transformer based approaches have helped advance the state of art across several natural language understanding tasks BIBREF26. These transformers models have also helped created large pre-trained language models such as BERT BIBREF9, XL-NET BIBREF11, GPT-2 BIBREF10. However, these pre-trained models show inconsistent behavior towards language generation BIBREF12."
],
[
"In this work, we study how pre-trained language models can be adopted for conditional language generation on smaller datasets. Specifically, we look at conditioning the pre-trained model on the emotion of the situation produce more affective responses that are appropriate for a particular situation. We notice that our fine-tuned and emo-prepend models outperform the current state of the art approach relative to the automated metrics such as BLEU and perplexity on the validation set. We also notice that the emo-prepend approach does not out perform a simple fine tuning approach on the dataset. We plan to investigate the cause of this in future work from the perspective of better experiment design for evaluation BIBREF34 and analyzing the models focus when emotion is prepended to the sequence BIBREF35. Along with this, we also notice other drawbacks in our work such as not having an emotional classifier to predict the outcome of the generated sentence, which we plan to address in future work."
],
[
"This work was supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No FA8650-18-C-7881. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of AFRL, DARPA, or the U.S. Government. We thank the anonymous reviewers for the helpful feedback."
]
],
"section_name": [
"Introduction",
"Approach",
"Experiments ::: Data",
"Experiments ::: Implementation",
"Experiments ::: Metrics",
"Results ::: Automated Metrics",
"Results ::: Qualitative Evaluation",
"Related Work",
"Conclusion and Discussion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"0d4d38fc07283bafbcee1f48af01a8ba00467b20",
"97b6a3c8c2619b58db2d9f4d4b3dbb04392025d6"
],
"answer": [
{
"evidence": [
"We first compare the performance of our approach with the baseline results obtained from Rashkin et al. BIBREF3 that uses a full transformer architecture BIBREF26, consisting of an encoder and decoder. Table TABREF9 provides a comparison of our approach with to the baseline approach. In Table TABREF9, we refer our “Our Model Fine-Tuned” as the baseline fine-tuned GPT-2 model trained on the dialogue and “Our-model Emo-prepend” as the GPT-2 model that is fine-tuned on the dialogues but also conditioned on the emotion displayed in the conversation. We find that fine-tuning the GPT-2 language model using a transfer learning approach helps us achieve a lower perplexity and a higher BLEU scores. The results from our approach are consistent with the empirical study conducted by Edunov et al BIBREF27 that demonstrate the effectiveness of the using pre-trained model diminishes when added to the decoder network in an seq2seq approach. We also perform a comparison between our two models on the metrics of length, diversity, readability and coherence. We find that our baseline model produces less diverse responses compared to when the model is conditioned on emotion. We find that the our emo-prepend model also higher a slightly higher readability score that our baseline model."
],
"extractive_spans": [
"Rashkin et al. BIBREF3 "
],
"free_form_answer": "",
"highlighted_evidence": [
"We first compare the performance of our approach with the baseline results obtained from Rashkin et al. BIBREF3 that uses a full transformer architecture BIBREF26, consisting of an encoder and decoder. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Emotions are intrinsic to humans and help in creation of a more engaging conversation BIBREF4. Recent work has focused on approaches towards incorporating emotion in conversational agents BIBREF5, BIBREF6, BIBREF7, BIBREF8, however these approaches are focused towards seq2seq task. We approach this problem of emotional generation as a form of transfer learning, using large pretrained language models. These language models, including BERT, GPT-2 and XL-Net, have helped achieve state of the art across several natural language understanding tasks BIBREF9, BIBREF10, BIBREF11. However, their success in language modeling tasks have been inconsistent BIBREF12. In our approach, we use these pretrained language models as the base model and perform transfer learning to fine-tune and condition these models on a given emotion. This helps towards producing more emotionally relevant responses for a given situation. In contrast, the work done by Rashkin et al. BIBREF3 also uses large pretrained models but their approach is from the perspective of seq2seq task.",
"We first compare the performance of our approach with the baseline results obtained from Rashkin et al. BIBREF3 that uses a full transformer architecture BIBREF26, consisting of an encoder and decoder. Table TABREF9 provides a comparison of our approach with to the baseline approach. In Table TABREF9, we refer our “Our Model Fine-Tuned” as the baseline fine-tuned GPT-2 model trained on the dialogue and “Our-model Emo-prepend” as the GPT-2 model that is fine-tuned on the dialogues but also conditioned on the emotion displayed in the conversation. We find that fine-tuning the GPT-2 language model using a transfer learning approach helps us achieve a lower perplexity and a higher BLEU scores. The results from our approach are consistent with the empirical study conducted by Edunov et al BIBREF27 that demonstrate the effectiveness of the using pre-trained model diminishes when added to the decoder network in an seq2seq approach. We also perform a comparison between our two models on the metrics of length, diversity, readability and coherence. We find that our baseline model produces less diverse responses compared to when the model is conditioned on emotion. We find that the our emo-prepend model also higher a slightly higher readability score that our baseline model."
],
"extractive_spans": [],
"free_form_answer": "For particular Empathetic-Dialogues corpus released Raskin et al. is state of the art (as well as the baseline) approach. Two terms are used interchangeably in the paper.",
"highlighted_evidence": [
"We approach this problem of emotional generation as a form of transfer learning, using large pretrained language models. These language models, including BERT, GPT-2 and XL-Net, have helped achieve state of the art across several natural language understanding tasks BIBREF9, BIBREF10, BIBREF11. However, their success in language modeling tasks have been inconsistent BIBREF12. In our approach, we use these pretrained language models as the base model and perform transfer learning to fine-tune and condition these models on a given emotion. This helps towards producing more emotionally relevant responses for a given situation. In contrast, the work done by Rashkin et al. BIBREF3 also uses large pretrained models but their approach is from the perspective of seq2seq task.",
"We first compare the performance of our approach with the baseline results obtained from Rashkin et al. BIBREF3 that uses a full transformer architecture BIBREF26, consisting of an encoder and decoder."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two"
],
"paper_read": [
"no"
],
"question": [
"What is the state-of-the-art approach?"
],
"question_id": [
"50c441a9cc7345a0fa408d1ce2e13f194c1e82a8"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Table 2: Statistics of Empathetic Dialogue dataset used in our experiments",
"Table 1: Example of conversations between a speaker and a listener",
"Table 3: Comparison of the performance of our model to the baseline model proposed by Rashkin et al (2019) across a variety of automated metrics to provide a thorough comparison. x indicates that these metrics were not provided in the Rashkin et al (2019) work.",
"Table 4: Example generations from our two model along with the ground truth responses.",
"Table 5: Human ratings demonstrating a comparison between our models to the ground truth responses on the metrics of readability, coherence and emotional appropriateness"
],
"file": [
"2-Table2-1.png",
"2-Table1-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"4-Table5-1.png"
]
} | [
"What is the state-of-the-art approach?"
] | [
[
"1911.11161-Results ::: Automated Metrics-0",
"1911.11161-Introduction-1"
]
] | [
"For particular Empathetic-Dialogues corpus released Raskin et al. is state of the art (as well as the baseline) approach. Two terms are used interchangeably in the paper."
] | 300 |
1710.07695 | Verb Pattern: A Probabilistic Semantic Representation on Verbs | Verbs are important in semantic understanding of natural language. Traditional verb representations, such as FrameNet, PropBank, VerbNet, focus on verbs' roles. These roles are too coarse to represent verbs' semantics. In this paper, we introduce verb patterns to represent verbs' semantics, such that each pattern corresponds to a single semantic of the verb. First we analyze the principles for verb patterns: generality and specificity. Then we propose a nonparametric model based on description length. Experimental results prove the high effectiveness of verb patterns. We further apply verb patterns to context-aware conceptualization, to show that verb patterns are helpful in semantic-related tasks. | {
"paragraphs": [
[
"Verb is crucial in sentence understanding BIBREF0 , BIBREF1 . A major issue of verb understanding is polysemy BIBREF2 , which means that a verb has different semantics or senses when collocating with different objects. In this paper, we only focus on verbs that collocate with objects. As illustrated in Example SECREF1 , most verbs are polysemous. Hence, a good semantic representation of verbs should be aware of their polysemy.",
"Example 1 (Verb Polysemy) eat has the following senses:",
"Many typical verb representations, including FrameNet BIBREF3 , PropBank BIBREF4 , and VerbNet BIBREF5 , describe verbs' semantic roles (e.g. ingestor and ingestibles for “eat”). However, semantic roles in general are too coarse to differentiate a verb's fine-grained semantics. A verb in different phrases can have different semantics but similar roles. In Example SECREF1 , both “eat”s in “eat breakfast” and “eat apple” have ingestor. But they have different semantics.",
"The unawareness of verbs' polysemy makes traditional verb representations unable to fully understand the verb in some applications. In sentence I like eating pitaya, people directly know “pitaya” is probably one kind of food since eating a food is the most fundamental semantic of “eat”. This enables context-aware conceptualization of pitaya to food concept. But by only knowing pitaya's role is the “ingestibles”, traditional representations cannot tell if pitaya is a food or a meal.",
"Verb Patterns We argue that verb patterns (available at http://kw.fudan.edu.cn/verb) can be used to represent more fine-grained semantics of a verb. We design verb patterns based on two word collocations principles proposed in corpus linguistics BIBREF6 : idiom principle and open-choice principle. Following the principles, we designed two types of verb patterns.",
"According to the above definitions, we use verb patterns to represent the verb's semantics. Phrases assigned to the same pattern have similar semantics, while those assigned to different patterns have different semantics. By verb patterns, we know the “pitaya” in I like eating pitaya is a food by mapping “eat pitaya” to “eat $ INLINEFORM0 food”. On the other hand, idiom patterns specify which phrases should not be conceptualized. We list verb phrases from Example SECREF1 and their verb patterns in Table TABREF7 . And we will show how context-aware conceptualization benefits from our verb patterns in the application section.",
"Thus, our problem is how to generate conceptualized patterns and idiom patterns for verbs. We use two public data sets for this purpose: Google Syntactic N-Grams (http://commondatastorage.googleapis.com/books/syntactic -ngrams/index.html) and Probase BIBREF7 . Google Syntactic N-grams contains millions of verb phrases, which allows us to mine rich patterns for verbs. Probase contains rich concepts for instances, which enables the conceptualization for objects. Thus, our problem is given a verb INLINEFORM0 and a set of its phrases, generating a set of patterns (either conceptualized patterns or idiom patterns) for INLINEFORM1 . However, the pattern generation for verbs is non-trivial. In general, the most critical challenge we face is the trade-off between generality and specificity of the generated patterns, as explained below."
],
[
"We try to answer the question: “what are good verb patterns to summarize a set of verb phrases?” This is hard because in general we have multiple candidate verb patterns. Intuitively, good verb patterns should be aware of the generality and specificity.",
"Generality In general, we hope to use fewer patterns to represent the verbs' semantics. Otherwise, the extracted patterns will be trivial. Consider one extreme case where all phrases are considered as idiom phrases. Such idiom patterns obviously make no sense since idioms in general are a minority of the verb phrases.",
"Example 2 In Fig FIGREF9 , (eat $ INLINEFORM0 meal) is obviously better than the three patterns (eat $ INLINEFORM1 breakfast + eat $ INLINEFORM2 lunch+ eat $ INLINEFORM3 dinner). The former case provides a more general representation.",
"Specificity On the other hand, we expect the generated patterns are specific enough, or the results might be trivial. As shown in Example SECREF11 , we can generate the objects into some high-level concepts such as activity, thing, and item. These conceptualized patterns in general are too vague to characterize a verb's fine-grained semantic.",
"Example 3 For phrases in Fig FIGREF9 , eat $ INLINEFORM0 activity is more general than eat $ INLINEFORM1 meal. As a result, some wrong verb phrases such as eat shopping or each fishing can be recognized as a valid instance of phrases for eat. Instead, eat $ INLINEFORM2 meal has good specificity. This is because breakfast, lunch, dinner are three typical instances of meal, and meal has few other instances.",
"Contributions Generality and specificity obviously contradict to each other. How to find a good trade-off between them is the main challenge in this paper. We will use minimum description length (MDL) as the basic framework to reconcile the two objectives. More specifically, our contribution in this paper can be summarized as follows:",
"We proposed verb patterns, a novel semantic representations of verb. We proposed two types of verb patterns: conceptualized patterns and idiom patterns. The verb pattern is polysemy-aware so that we can use it to distinguish different verb semantics.",
"We proposed the principles for verb pattern extraction: generality and specificity. We show that the trade-off between them is the main challenge of pattern generation. We further proposed an unsupervised model based on minimum description length to generate verb patterns.",
"We conducted extensive experiments. The results verify the effectiveness of our model and algorithm. We presented the applications of verb patterns in context-aware conceptualization. The application justifies the effectiveness of verb patterns to represent verb semantics."
],
[
"In this section, we define the problem of extracting patterns for verb phrases. The goal of pattern extraction is to compute: (1) the pattern for each verb phrase; (2) the pattern distribution for each verb. Next, we first give some preliminary definitions. Then we formalize our problem based on minimum description length. The patterns of different verbs are independent from each other. Hence, we only need to focus on each individual verb and its phrases. In the following text, we discuss our solution with respect to a given verb."
],
[
"First, we formalize the definition of verb phrase, verb pattern, and pattern assignment. A verb phrase INLINEFORM0 is in the form of verb + object (e.g. “eat apple”). We denote the object in INLINEFORM1 as INLINEFORM2 . A verb pattern is either an idiom pattern or a conceptualized pattern. Idiom Pattern is in the form of verb $ INLINEFORM3 object (e.g. eat $ INLINEFORM4 humble pie). Conceptualized Pattern is in the form of verb $ INLINEFORM5 concept (e.g. eat $ INLINEFORM6 meal). We denote the concept in a conceptualized pattern INLINEFORM7 as INLINEFORM8 .",
"Definition 1 (Pattern Assignment) A pattern assignment is a function INLINEFORM0 that maps an arbitrary phrase INLINEFORM1 to its pattern INLINEFORM2 . INLINEFORM3 means the pattern of INLINEFORM4 is INLINEFORM5 . The assignment has two constraints:",
"For an idiom pattern verb $ INLINEFORM0 object, only phrase verb object can map to it.",
"For a conceptualized pattern verb $ INLINEFORM0 concept, a phrase verb object can map to it only if the object belongs to the concept in Probase BIBREF7 .",
"An example of verb phrases, verb patterns, and a valid pattern assignment is shown in Table TABREF7 .",
"We assume the phrase distribution is known (in our experiments, such distribution is derived from Google Syntactic Ngram). So the goal of this paper is to find INLINEFORM0 . With INLINEFORM1 , we can easily compute the pattern distribution INLINEFORM2 by: DISPLAYFORM0 ",
", where INLINEFORM0 is the probability to observe phrase INLINEFORM1 in all phrases of the verb of interest. Note that the second equation holds due to the obvious fact that INLINEFORM2 when INLINEFORM3 . INLINEFORM4 can be directly estimated as the ratio of INLINEFORM5 's frequency as in Eq EQREF45 ."
],
[
"Next, we formalize our model based on minimum description length. We first discuss our intuition to use Minimum Description Length (MDL) BIBREF8 . MDL is based on the idea of data compression. Verb patterns can be regarded as a compressed representation of verb phrases. Intuitively, if the pattern assignment provides a compact description of phrases, it captures the underlying verb semantics well.",
"Given verb phrases, we seek for the best assignment function INLINEFORM0 that minimizes the code length of phrases. Let INLINEFORM1 be the code length derived by INLINEFORM2 . The problem of verb pattern assignment thus can be formalized as below:",
"Problem Definition 1 (Pattern Assignment) Given the phrase distribution INLINEFORM0 , find the pattern assignment INLINEFORM1 , such that INLINEFORM2 is minimized: DISPLAYFORM0 ",
"We use a two-part encoding schema to encode each phrase. For each phrase INLINEFORM0 , we need to encode its pattern INLINEFORM1 (let the code length be INLINEFORM2 ) as well as the INLINEFORM3 itself given INLINEFORM4 (let the code length be INLINEFORM5 ). Thus, we have DISPLAYFORM0 ",
"Here INLINEFORM0 is the code length of INLINEFORM1 and consists of INLINEFORM2 and INLINEFORM3 .",
" INLINEFORM0 : Code Length for Patterns To encode INLINEFORM1 's pattern INLINEFORM2 , we need: DISPLAYFORM0 ",
"bits, where INLINEFORM0 is computed by Eq EQREF19 .",
" INLINEFORM0 : Code Length for Phrase given Pattern After knowing its pattern INLINEFORM1 , we use INLINEFORM2 , the probability of INLINEFORM3 given INLINEFORM4 to encode INLINEFORM5 . INLINEFORM6 is computed from Probase BIBREF7 and is treated as a prior. Thus, we encode INLINEFORM7 with code length INLINEFORM8 . To compute INLINEFORM9 , we consider two cases:",
"Case 1: INLINEFORM0 is an idiom pattern. Since each idiom pattern has only one phrase, we have INLINEFORM1 .",
"Case 2: INLINEFORM0 is a conceptualized pattern. In this case, we only need to encode the object INLINEFORM1 given the concept in INLINEFORM2 . We leverage INLINEFORM3 , the probability of object INLINEFORM4 given concept INLINEFORM5 (which is given by the isA taxonomy), to encode the phrase. We will give more details about the probability computation in the experimental settings.",
"Thus, we have DISPLAYFORM0 ",
"Total Length We sum up the code length for all phrases to get the total code length INLINEFORM0 for assignment INLINEFORM1 : DISPLAYFORM0 ",
" Note that here we introduce the parameter INLINEFORM0 to control the relative importance of INLINEFORM1 and INLINEFORM2 . Next, we will explain that INLINEFORM3 actually reflects the trade-off between the generality and the specificity of the patterns."
],
[
"Next, we elaborate the rationality of our model by showing how the model reflects principles of verb patterns (i.e. generality and specificity). For simplicity, we define INLINEFORM0 and INLINEFORM1 as below to denote the total code length for patterns and total code length for phrases themselves: DISPLAYFORM0 DISPLAYFORM1 ",
"Generality We show that by minimizing INLINEFORM0 , our model can find general patterns. Let INLINEFORM1 be all the patterns that INLINEFORM2 maps to and INLINEFORM3 be the set of each phrase INLINEFORM4 such that INLINEFORM5 . Due to Eq EQREF19 and Eq EQREF30 , we have: DISPLAYFORM0 ",
"So INLINEFORM0 is the entropy of the pattern distribution. Minimizing the entropy favors the assignment that maps phrases to fewer patterns. This satisfies the generality principle.",
"Specificity We show that by minimizing INLINEFORM0 , our model finds specific patterns. The inner part in the last equation of Eq EQREF33 actually is the cross entropy between INLINEFORM1 and INLINEFORM2 . Thus INLINEFORM3 has a small value if INLINEFORM4 and INLINEFORM5 have similar distributions. This reflects the specificity principle. DISPLAYFORM0 "
],
[
"In this section, we propose an algorithm based on simulated annealing to solve Problem SECREF21 . We also show how we use external knowledge to optimize the idiom patterns.",
"We adopted a simulated annealing (SA) algorithm to compute the best pattern assignment INLINEFORM0 . The algorithm proceeds as follows. We first pick a random assignment as the initialization (initial temperature). Then, we generate a new assignment and evaluate it. If it is a better assignment, we replace the previous assignment with it; otherwise we accept it with a certain probability (temperature reduction). The generation and replacement step are repeated until no change occurs in the last INLINEFORM1 iterations (termination condition)."
],
[
"Verb Phrase Data The pattern assignment uses the phrase distribution INLINEFORM0 . To do this, we use the “English All” dataset in Google Syntactic N-Grams. The dataset contains counted syntactic ngrams extracted from the English portion of the Google Books corpus. It contains 22,230 different verbs (without stemming), and 147,056 verb phrases. For a fixed verb, we compute the probability of phrase INLINEFORM1 by: DISPLAYFORM0 ",
", where INLINEFORM0 is the frequency of INLINEFORM1 in the corpus, and the denominator sums over all phrases of this verb.",
"IsA Relationship We use Probase to compute the probability of an entity given a concept INLINEFORM0 , as well as the probability of the concept given an entity INLINEFORM1 : DISPLAYFORM0 ",
",where INLINEFORM0 is the frequency that INLINEFORM1 and INLINEFORM2 co-occur in Probase.",
"Test data We use two data sets to show our solution can achieve consistent effectiveness on both short text and long text. The short text data set contains 1.6 millions of tweets from Twitter BIBREF9 . The long text data set contains 21,578 news articles from Reuters BIBREF10 ."
],
[
"Now we give an overview of our extracted verb patterns. For all 22,230 verbs, we report the statistics for the top 100 verbs of the highest frequency. After filtering noisy phrases with INLINEFORM0 , each verb has 171 distinct phrases and 97.2 distinct patterns on average. 53% phrases have conceptualized patterns. 47% phrases have idiom patterns. In Table TABREF48 , we list 5 typical verbs and their top patterns. The case study verified that (1) our definition of verb pattern reflects verb's polysemy; (2) most verb patterns we found are meaningful."
],
[
"To evaluate the effectiveness of our pattern summarization approach, we report two metrics: (1) ( INLINEFORM0 ) how much of the verb phrases in natural language our solution can find corresponding patterns (2) ( INLINEFORM1 ) how much of the phrases and their corresponding patterns are correctly matched? We compute the two metrics by: DISPLAYFORM0 ",
",where INLINEFORM0 is the number of phrases in the test data for which our solution finds corresponding patterns, INLINEFORM1 is the total number of phrases, INLINEFORM2 is the number of phrases whose corresponding patterns are correct. To evaluate INLINEFORM3 , we randomly selected 100 verb phrases from the test data and ask volunteers to label the correctness of their assigned patterns. We regard a phrase-pattern matching is incorrect if it's either too specific or too general (see examples in Fig FIGREF9 ). For comparison, we also tested two baselines for pattern summarization:",
"Idiomatic Baseline (IB) We treat each verb phrase as a idiom.",
"Conceptualized Baseline (CB) For each phrase, we assign it to a conceptualized pattern. For object INLINEFORM0 , we choose the concept with the highest probability, i.e. INLINEFORM1 , to construct the pattern.",
"Verb patterns cover 64.3% and 70% verb phrases in Tweets and News, respectively. Considering the spelling errors or parsing errors in Google N-Gram data, the coverage in general is acceptable. We report the precision of the extracted verb patterns (VP) with the comparisons to baselines in Fig FIGREF53 . The results show that our approach (VP) has a significant priority over the baselines in terms of precision. The result suggests that both conceptualized patterns and idiom patterns are necessary for the semantic representation of verbs."
],
[
"As suggested in the introduction, we can use verb patterns to improve context-aware conceptualization (i.e. to extract an entity's concept while considering its context). We do this by incorporating the verb patterns into a state-of-the-art entity-based approach BIBREF11 .",
"Entity-based approach The approach conceptualizes an entity INLINEFORM0 by fully employing the mentioned entities in the context. Let INLINEFORM1 be entities in the context. We denote the probability that INLINEFORM2 is the concept of INLINEFORM3 given the context INLINEFORM4 as INLINEFORM5 . By assuming all these entities are independent for the given concept, we compute INLINEFORM6 by: DISPLAYFORM0 ",
"Our approach We add the verb in the context as an additional feature to conceptualize INLINEFORM0 when INLINEFORM1 is an object of the verb. From verb patterns, we can derive INLINEFORM2 , which is the probability to observe the conceptualized pattern with concept INLINEFORM3 in all phrases of verb INLINEFORM4 . Thus, the probability of INLINEFORM5 conditioned on INLINEFORM6 given the context INLINEFORM7 as well as verb INLINEFORM8 is INLINEFORM9 . Similar to Eq EQREF54 , we compute it by: DISPLAYFORM0 ",
" Note that if INLINEFORM0 is observed in Google Syntactic N-Grams, which means that we have already learned its pattern, then we can use these verb patterns to do the conceptualization. That is, if INLINEFORM1 is mapped to a conceptualized pattern, we use the pattern's concept as the conceptualization result. If INLINEFORM2 is an idiom pattern, we stop the conceptualization.",
"Settings and Results For the two datasets used in the experimental section, we use both approaches to conceptualize objects in all verb phrases. Then, we select the concept with the highest probability as the label of the object. We randomly select 100 phrases for which the two approaches generate different labels. For each difference, we manually label if our result is better than, equal to, or worse than the competitor. Results are shown in Fig FIGREF56 . On both datasets, the precisions are significantly improved after adding verb patterns. This verifies that verb patterns are helpful in semantic understanding tasks."
],
[
"Traditional Verb Representations We compare verb patterns with traditional verb representations BIBREF12 . FrameNet BIBREF3 is built upon the idea that the meanings of most words can be best understood by semantic frames BIBREF13 . Semantic frame is a description of a type of event, relation, or entity and the participants in it. And each semantic frame uses frame elements (FEs) to make simple annotations. PropBank BIBREF4 uses manually labeled predicates and arguments of semantic roles, to capture the precise predicate-argument structure. The predicates here are verbs, while arguments are other roles of verb. To make PropBank more formalized, the arguments always consist of agent, patient, instrument, starting point and ending point. VerbNet BIBREF5 classifies verbs according to their syntax patterns based on Levin classes BIBREF14 . All these verb representations focus on different roles of the verb instead of the semantics of verb. While different verb semantics might have similar roles, the existing representations cannot fully characterize the verb's semantics.",
"Conceptualization One typical application of our work is context-aware conceptualization, which motivates the survey of the conceptualization. Conceptualization determines the most appropriate concept for an entity.Traditional text retrieval based approaches use NER BIBREF15 for conceptualization. But NER usually has only a few predefined coarse concepts. Wu et al. built a knowledge base with large-scale lexical information to provide richer IsA relations BIBREF7 . Using IsA relations, context-aware conceptualization BIBREF16 performs better. Song et al. BIBREF11 proposed a conceptualization mechanism by Naive Bayes. And Wen et al. BIBREF17 proposed a state-of-the-art model by combining co-occurrence network, IsA network and concept clusters.",
"Semantic Composition We represent verb phrases by verb patterns. while semantic composition works aim to represent the meaning of an arbitrary phrase as a vector or a tree. Vector-space model is widely used to represent the semantic of single word. A straightforward approach to characterize the semantic of a phrase thus is averaging the vectors over all the phrase's words BIBREF18 . But this approach certainly ignores the syntactic relation BIBREF19 between words. Socher et al. BIBREF20 represent the syntactic relation by a binary tree, which is fed into a recursive neural network together with the words' vectors. Recently, word2vec BIBREF21 shows its advantage in single word representation. Mikolov et al. BIBREF22 further revise it to make word2vec capable for phrase vector. In summary, none of these works uses the idiom phrases of verbs and concept of verb's object to represent the semantics of verbs."
],
[
"Verbs' semantics are important in text understanding. In this paper, we proposed verb patterns, which can distinguish different verb semantics. We built a model based on minimum description length to trade-off between generality and specificity of verb patterns. We also proposed a simulated annealing based algorithm to extract verb patterns. We leverage patterns' typicality to accelerate the convergence by pattern-based candidate generation. Experiments justify the high precision and coverage of our extracted patterns. We also presented a successful application of verb patterns into context-aware conceptualization."
]
],
"section_name": [
"Introduction",
"Trade-off between Generality and Specificity",
"Problem Model",
"Preliminary Definitions",
"Model",
"Rationality",
"Algorithm",
"Settings",
"Statistics of Verb Patterns",
"Effectiveness",
"Application: Context-Aware Conceptualization",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"13aa70ecdc23f2a8a46c2cd7888da67875a2d4b4",
"45a17f0a66a086664e31af562117106538cae72c"
],
"answer": [
{
"evidence": [
"Given verb phrases, we seek for the best assignment function INLINEFORM0 that minimizes the code length of phrases. Let INLINEFORM1 be the code length derived by INLINEFORM2 . The problem of verb pattern assignment thus can be formalized as below:"
],
"extractive_spans": [
"the code length of phrases."
],
"free_form_answer": "",
"highlighted_evidence": [
"Given verb phrases, we seek for the best assignment function INLINEFORM0 that minimizes the code length of phrases. Let INLINEFORM1 be the code length derived by INLINEFORM2 . "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Contributions Generality and specificity obviously contradict to each other. How to find a good trade-off between them is the main challenge in this paper. We will use minimum description length (MDL) as the basic framework to reconcile the two objectives. More specifically, our contribution in this paper can be summarized as follows:"
],
"extractive_spans": [],
"free_form_answer": "Minimum description length (MDL) as the basic framework to reconcile the two contradicting objectives: generality and specificity.",
"highlighted_evidence": [
"Generality and specificity obviously contradict to each other. How to find a good trade-off between them is the main challenge in this paper. We will use minimum description length (MDL) as the basic framework to reconcile the two objectives."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"57766c085cc18c89653f5f1497ec981d39f60ab8",
"7022d84f0c7f75c36d30dadb5777b9ea9f1658db"
],
"answer": [
{
"evidence": [
"Verb Phrase Data The pattern assignment uses the phrase distribution INLINEFORM0 . To do this, we use the “English All” dataset in Google Syntactic N-Grams. The dataset contains counted syntactic ngrams extracted from the English portion of the Google Books corpus. It contains 22,230 different verbs (without stemming), and 147,056 verb phrases. For a fixed verb, we compute the probability of phrase INLINEFORM1 by: DISPLAYFORM0"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Verb Phrase Data The pattern assignment uses the phrase distribution INLINEFORM0 . To do this, we use the “English All” dataset in Google Syntactic N-Grams. The dataset contains counted syntactic ngrams extracted from the English portion of the Google Books corpus. It contains 22,230 different verbs (without stemming), and 147,056 verb phrases. For a fixed verb, we compute the probability of phrase INLINEFORM1 by: DISPLAYFORM0"
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Verb Phrase Data The pattern assignment uses the phrase distribution INLINEFORM0 . To do this, we use the “English All” dataset in Google Syntactic N-Grams. The dataset contains counted syntactic ngrams extracted from the English portion of the Google Books corpus. It contains 22,230 different verbs (without stemming), and 147,056 verb phrases. For a fixed verb, we compute the probability of phrase INLINEFORM1 by: DISPLAYFORM0"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" The pattern assignment uses the phrase distribution INLINEFORM0 . To do this, we use the “English All” dataset in Google Syntactic N-Grams. The dataset contains counted syntactic ngrams extracted from the English portion of the Google Books corpus. It contains 22,230 different verbs (without stemming), and 147,056 verb phrases. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"0d5376496afa726fcde275d9c3d3d608ee345f24",
"4d54379e50619e9be0ad2e9aecfc3401ebad2376"
],
"answer": [
{
"evidence": [
"To evaluate the effectiveness of our pattern summarization approach, we report two metrics: (1) ( INLINEFORM0 ) how much of the verb phrases in natural language our solution can find corresponding patterns (2) ( INLINEFORM1 ) how much of the phrases and their corresponding patterns are correctly matched? We compute the two metrics by: DISPLAYFORM0"
],
"extractive_spans": [],
"free_form_answer": "coverage and precision",
"highlighted_evidence": [
"To evaluate the effectiveness of our pattern summarization approach, we report two metrics: (1) ( INLINEFORM0 ) how much of the verb phrases in natural language our solution can find corresponding patterns (2) ( INLINEFORM1 ) how much of the phrases and their corresponding patterns are correctly matched?"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To evaluate the effectiveness of our pattern summarization approach, we report two metrics: (1) ( INLINEFORM0 ) how much of the verb phrases in natural language our solution can find corresponding patterns (2) ( INLINEFORM1 ) how much of the phrases and their corresponding patterns are correctly matched? We compute the two metrics by: DISPLAYFORM0"
],
"extractive_spans": [
"INLINEFORM0 ",
"INLINEFORM1 "
],
"free_form_answer": "",
"highlighted_evidence": [
"To evaluate the effectiveness of our pattern summarization approach, we report two metrics: (1) ( INLINEFORM0 ) how much of the verb phrases in natural language our solution can find corresponding patterns (2) ( INLINEFORM1 ) how much of the phrases and their corresponding patterns are correctly matched? We compute the two metrics by: DISPLAYFORM0"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"what do they mean by description length?",
"do they focus on english verbs?",
"what evaluation metrics are used?"
],
"question_id": [
"2895a3fc63f6f403445c11043460584e949fb16c",
"1e7e3f0f760cd628f698b73d82c0f946707855ca",
"64632981279c7aa16ffc1a44ffc31f4520f5559e"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1: Verb phrases and their patterns",
"Figure 1: Examples of Pattern Assignments",
"Table 2: Some extracted patterns. The number in brackets is the phrase’s frequency in Google Syntactic N-Gram. #phrase means the number of distinct phrases of the verb.",
"Figure 2: Precision",
"Figure 3: Conceptualization Results"
],
"file": [
"2-Table1-1.png",
"2-Figure1-1.png",
"5-Table2-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png"
]
} | [
"what do they mean by description length?"
] | [
[
"1710.07695-Model-1",
"1710.07695-Trade-off between Generality and Specificity-5"
]
] | [
"Minimum description length (MDL) as the basic framework to reconcile the two contradicting objectives: generality and specificity."
] | 301 |
1605.05195 | Enhanced Twitter Sentiment Classification Using Contextual Information | The rise in popularity and ubiquity of Twitter has made sentiment analysis of tweets an important and well-covered area of research. However, the 140 character limit imposed on tweets makes it hard to use standard linguistic methods for sentiment classification. On the other hand, what tweets lack in structure they make up with sheer volume and rich metadata. This metadata includes geolocation, temporal and author information. We hypothesize that sentiment is dependent on all these contextual factors. Different locations, times and authors have different emotional valences. In this paper, we explored this hypothesis by utilizing distant supervision to collect millions of labelled tweets from different locations, times and authors. We used this data to analyse the variation of tweet sentiments across different authors, times and locations. Once we explored and understood the relationship between these variables and sentiment, we used a Bayesian approach to combine these variables with more standard linguistic features such as n-grams to create a Twitter sentiment classifier. This combined classifier outperforms the purely linguistic classifier, showing that integrating the rich contextual information available on Twitter into sentiment classification is a promising direction of research. | {
"paragraphs": [
[
"Twitter is a micro-blogging platform and a social network where users can publish and exchange short messages of up to 140 characters long (also known as tweets). Twitter has seen a great rise in popularity in recent years because of its availability and ease-of-use. This rise in popularity and the public nature of Twitter (less than 10% of Twitter accounts are private BIBREF0 ) have made it an important tool for studying the behaviour and attitude of people.",
"One area of research that has attracted great attention in the last few years is that of tweet sentiment classification. Through sentiment classification and analysis, one can get a picture of people's attitudes about particular topics on Twitter. This can be used for measuring people's attitudes towards brands, political candidates, and social issues.",
"There have been several works that do sentiment classification on Twitter using standard sentiment classification techniques, with variations of n-gram and bag of words being the most common. There have been attempts at using more advanced syntactic features as is done in sentiment classification for other domains BIBREF1 , BIBREF2 , however the 140 character limit imposed on tweets makes this hard to do as each article in the Twitter training set consists of sentences of no more than several words, many of them with irregular form BIBREF3 .",
"On the other hand, what tweets lack in structure they make up with sheer volume and rich metadata. This metadata includes geolocation, temporal and author information. We hypothesize that sentiment is dependent on all these contextual factors. Different locations, times and authors have different emotional valences. For instance, people are generally happier on weekends and certain hours of the day, more depressed at the end of summer holidays, and happier in certain states in the United States. Moreover, people have different baseline emotional valences from one another. These claims are supported for example by the annual Gallup poll that ranks states from most happy to least happy BIBREF4 , or the work by Csikszentmihalyi and Hunter BIBREF5 that showed reported happiness varies significantly by day of week and time of day. We believe these factors manifest themselves in sentiments expressed in tweets and that by accounting for these factors, we can improve sentiment classification on Twitter.",
"In this work, we explored this hypothesis by utilizing distant supervision BIBREF6 to collect millions of labelled tweets from different locations (within the USA), times of day, days of the week, months and authors. We used this data to analyse the variation of tweet sentiments across the aforementioned categories. We then used a Bayesian approach to incorporate the relationship between these factors and tweet sentiments into standard n-gram based Twitter sentiment classification.",
"This paper is structured as follows. In the next sections we will review related work on sentiment classification, followed by a detailed explanation of our approach and our data collection, annotation and processing efforts. After that, we describe our baseline n-gram sentiment classifier model, followed by the explanation of how the baseline model is extended to incorporate contextual information. Next, we describe our analysis of the variation of sentiment within each of the contextual categories. We then evaluate our models and finally summarize our findings and contributions and discuss possible paths for future work."
],
[
"Sentiment analysis and classification of text is a problem that has been well studied across many different domains, such as blogs, movie reviews, and product reviews (e.g., BIBREF7 , BIBREF8 , BIBREF9 ). There is also extensive work on sentiment analysis for Twitter. Most of the work on Twitter sentiment classification either focuses on different machine learning techniques (e.g., BIBREF10 , BIBREF11 ), novel features (e.g., BIBREF12 , BIBREF13 , BIBREF3 ), new data collection and labelling techniques (e.g., BIBREF6 ) or the application of sentiment classification to analyse the attitude of people about certain topics on Twitter (e.g., BIBREF14 , BIBREF15 ). These are just some examples of the extensive research already done on Twitter sentiment classification and analysis.",
"There has also been previous work on measuring the happiness of people in different contexts (location, time, etc). This has been done mostly through traditional land-line polling BIBREF5 , BIBREF4 , with Gallup's annual happiness index being a prime example BIBREF4 . More recently, some have utilized Twitter to measure people's mood and happiness and have found Twitter to be a generally good measure of the public's overall happiness, well-being and mood. For example, Bollen et al. BIBREF15 used Twitter to measure the daily mood of the public and compare that to the record of social, political, cultural and economic events in the real world. They found that these events have a significant effect on the public mood as measured through Twitter. Another example would be the work of Mitchell et al. BIBREF16 , in which they estimated the happiness levels of different states and cities in the USA using Twitter and found statistically significant correlations between happiness level and the demographic characteristics (such as obesity rates and education levels) of those regions. Finally, improving natural language processing by incorporating contextual information has been successfully attempted before BIBREF17 , BIBREF18 ; but as far as we are aware, this has not been attempted for sentiment classification.",
"In this work, we combined the sentiment analysis of different authors, locations, times and dates as measured through labelled Twitter data with standard word-based sentiment classification methods to create a context-dependent sentiment classifier. As far as we can tell, there has not been significant previous work on Twitter sentiment classification that has achieved this."
],
[
"The main hypothesis behind this work is that the average sentiment of messages on Twitter is different in different contexts. Specifically, tweets in different spatial, temporal and authorial contexts have on average different sentiments. Basically, these factors (many of which are environmental) have an affect on the emotional states of people which in turn have an effect on the sentiments people express on Twitter and elsewhere. In this paper, we used this contextual information to better predict the sentiment of tweets.",
"Luckily, tweets are tagged with very rich metadata, including location, timestamp, and author information. By analysing labelled data collected from these different contexts, we calculated prior probabilities of negative and positive sentiments for each of the contextual categories shown below:",
"This means that for every item in each of these categories, we calculated a probability of sentiment being positive or negative based on historical tweets. For example, if seven out of ten historical tweets made on Friday were positive then the prior probability of a sentiment being positive for tweets sent out on Friday is INLINEFORM0 and the prior probability of a sentiment being negative is INLINEFORM1 . We then trained a Bayesian sentiment classifier using a combination of these prior probabilities and standard n-gram models. The model is described in great detail in the \"Baseline Model\" and \"Contextual Model\" sections of this paper.",
"In order to do a comprehensive analysis of sentiment of tweets across aforementioned contextual categories, a large amount of labelled data was required. We needed thousands of tweets for every item in each of the categories (e.g. thousands of tweets per hour of day, or state in the US). Therefore, creating a corpus using human-annotated data would have been impractical. Instead, we turned to distant supervision techniques to obtain our corpus. Distant supervision allows us to have noisy but large amounts of annotated tweets.",
"There are different methods of obtaining labelled data using distant supervision BIBREF1 , BIBREF6 , BIBREF19 , BIBREF12 . We used emoticons to label tweets as positive or negative, an approach that was introduced by Read BIBREF1 and used in multiple works BIBREF6 , BIBREF12 . We collected millions of English-language tweets from different times, dates, authors and US states. We used a total of six emoticons, three mapping to positive and three mapping to negative sentiment (table TABREF7 ). We identified more than 120 positive and negative ASCII emoticons and unicode emojis, but we decided to only use the six most common emoticons in order to avoid possible selection biases. For example, people who use obscure emoticons and emojis might have a different base sentiment from those who do not. Using the six most commonly used emoticons limits this bias. Since there are no \"neutral\" emoticons, our dataset is limited to tweets with positive or negative sentiments. Accordingly, in this work we are only concerned with analysing and classifying the polarity of tweets (negative vs. positive) and not their subjectivity (neutral vs. non-neutral). Below we will explain our data collection and corpus in greater detail."
],
[
"We collected two datasets, one massive and labelled through distant supervision, the other small and labelled by humans. The massive dataset was used to calculate the prior probabilities for each of our contextual categories. Both datasets were used to train and test our sentiment classifier. The human-labelled dataset was used as a sanity check to make sure the dataset labelled using the emoticons classifier was not too noisy and that the human and emoticon labels matched for a majority of tweets."
],
[
"We collected a total of 18 million, geo-tagged, English-language tweets over three years, from January 1st, 2012 to January 1st, 2015, evenly divided across all 36 months, using Historical PowerTrack for Twitter provided by GNIP. We created geolocation bounding boxes for each of the 50 states which were used to collect our dataset. All 18 million tweets originated from one of the 50 states and are tagged as such. Moreover, all tweets contained one of the six emoticons in Table TABREF7 and were labelled as either positive or negative based on the emoticon. Out of the 18 million tweets, INLINEFORM0 million ( INLINEFORM1 ) were labelled as positive and INLINEFORM2 million ( INLINEFORM3 ) were labelled as negative. The 18 million tweets came from INLINEFORM4 distinct users."
],
[
"We randomly selected 3000 tweets from our large dataset and had all their emoticons stripped. We then had these tweets labelled as positive or negative by three human annotators. We measured the inter-annotator agreement using Fleiss' kappa, which calculates the degree of agreement in classification over that which would be expected by chance BIBREF20 . The kappa score for the three annotators was INLINEFORM0 , which means that there were disagreements in sentiment for a small portion of the tweets. However, the number of tweets that were labelled the same by at least two of the three human annotator was 2908 out of of the 3000 tweets ( INLINEFORM1 ). Of these 2908 tweets, INLINEFORM2 were labelled as positive and INLINEFORM3 as negative.",
"We then measured the agreement between the human labels and emoticon-based labels, using only tweets that were labelled the same by at least two of the three human annotators (the majority label was used as the label for the tweet). Table TABREF13 shows the confusion matrix between human and emoticon-based annotations. As you can see, INLINEFORM0 of all labels matched ( INLINEFORM1 ).",
"These results are very promising and show that using emoticon-based distant supervision to label the sentiment of tweets is an acceptable method. Though there is some noise introduced to the dataset (as evidenced by the INLINEFORM0 of tweets whose human labels did not match their emoticon labels), the sheer volume of labelled data that this method makes accessible, far outweighs the relatively small amount of noise introduced."
],
[
"Since the data is labelled using emoticons, we stripped all emoticons from the training data. This ensures that emoticons are not used as a feature in our sentiment classifier. A large portion of tweets contain links to other websites. These links are mostly not meaningful semantically and thus can not help in sentiment classification. Therefore, all links in tweets were replaced with the token \"URL\". Similarly, all mentions of usernames (which are denoted by the @ symbol) were replaced with the token \"USERNAME\", since they also can not help in sentiment classification. Tweets also contain very informal language and as such, characters in words are often repeated for emphasis (e.g., the word good is used with an arbitrary number of o's in many tweets). Any character that was repeated more than two times was removed (e.g., goooood was replaced with good). Finally, all words in the tweets were stemmed using Porter Stemming BIBREF21 ."
],
[
"For our baseline sentiment classification model, we used our massive dataset to train a negative and positive n-gram language model from the negative and positive tweets.",
"As our baseline model, we built purely linguistic bigram models in Python, utilizing some components from NLTK BIBREF22 . These models used a vocabulary that was filtered to remove words occurring 5 or fewer times. Probability distributions were calculated using Kneser-Ney smoothing BIBREF23 . In addition to Kneser-Ney smoothing, the bigram models also used “backoff” smoothing BIBREF24 , in which an n-gram model falls back on an INLINEFORM0 -gram model for words that were unobserved in the n-gram context.",
"In order to classify the sentiment of a new tweet, its probability of fit is calculated using both the negative and positive bigram models. Equation EQREF15 below shows our models through a Bayesian lens. DISPLAYFORM0 ",
"Here INLINEFORM0 can be INLINEFORM1 or INLINEFORM2 , corresponding to the hypothesis that the sentiment of the tweet is positive or negative respectively. INLINEFORM3 is the sequence of INLINEFORM4 words, written as INLINEFORM5 , that make up the tweet. INLINEFORM6 is not dependent on the hypothesis, and can thus be ignored. Since we are using a bigram model, Equation EQREF15 can be written as: DISPLAYFORM0 ",
"This is our purely linguistic baseline model."
],
[
"The Bayesian approach allows us to easily integrate the contextual information into our models. INLINEFORM0 in Equation EQREF16 is the prior probability of a tweet having the sentiment INLINEFORM1 . The prior probability ( INLINEFORM2 ) can be calculated using the contextual information of the tweets. Therefore, INLINEFORM3 in equation EQREF16 is replaced by INLINEFORM4 , which is the probability of the hypothesis given the contextual information. INLINEFORM5 is the posterior probability of the following Bayesian equation: DISPLAYFORM0 ",
"Where INLINEFORM0 is the set of contextual variables: INLINEFORM1 . INLINEFORM2 captures the probability that a tweet is positive or negative, given the state, hour of day, day of the week, month and author of the tweet. Here INLINEFORM3 is not dependent on the hypothesis, and thus can be ignored. Equation EQREF16 can therefore be rewritten to include the contextual information: DISPLAYFORM0 ",
"Equation EQREF18 is our extended Bayesian model for integrating contextual information with more standard, word-based sentiment classification."
],
[
"We considered five contextual categories: one spatial, three temporal and one authorial. Here is the list of the five categories:",
"We used our massive emoticon labelled dataset to calculate the average sentiment for all of these five categories. A tweet was given a score of INLINEFORM0 if it was labelled as negative and a score 1 if it was labelled as positive, so an average sentiment of 0 for a contextual category would mean that tweets in that category were evenly labelled as positive and negative."
],
[
"All of the 18 million tweets in our dataset originate from the USA and are geo-tagged. Naturally, the tweets are not evenly distributed across the 50 states given the large variation between the population of each state. Figure FIGREF25 shows the percentage of tweets per state, sorted from smallest to largest. Not surprisingly, California has the highest number of tweets ( INLINEFORM0 ), and Wyoming has the lowest number of tweets ( INLINEFORM1 ).",
"Even the state with the lowest percentage of tweets has more than ten thousand tweets, which is enough to calculate a statistically significant average sentiment for that state. The sentiment for all states averaged across the tweets from the three years is shown in Figure FIGREF26 . Note that an average sentiment of INLINEFORM0 means that all tweets were labelled as positive, INLINEFORM1 means that all tweets were labelled as negative and INLINEFORM2 means that there was an even distribution of positive and negative tweets. The average sentiment of all the states leans more towards the positive side. This is expected given that INLINEFORM3 of the tweets in our dataset were labelled as positive.",
"It is interesting to note that even with the noisy dataset, our ranking of US states based on their Twitter sentiment correlates with the ranking of US states based on the well-being index calculated by Oswald and Wu BIBREF25 in their work on measuring well-being and life satisfaction across America. Their data is from the behavioral risk factor survey score (BRFSS), which is a survey of life satisfaction across the United States from INLINEFORM0 million citizens. Figure FIGREF27 shows this correlation ( INLINEFORM1 , INLINEFORM2 )."
],
[
"We looked at three temporal variables: time of day, day of the week and month. All tweets are tagged with timestamp data, which we used to extract these three variables. Since all timestamps in the Twitter historical archives (and public API) are in the UTC time zone, we first converted the timestamp to the local time of the location where the tweet was sent from. We then calculated the sentiment for each day of week (figure FIGREF29 ), hour (figure FIGREF30 ) and month (figure FIGREF31 ), averaged across all 18 million tweets over three years. The 18 million tweets were divided evenly between each month, with INLINEFORM0 million tweets per month. The tweets were also more or less evenly divided between each day of week, with each day having somewhere between INLINEFORM1 and INLINEFORM2 of the tweets. Similarly, the tweets were almost evenly divided between each hour, with each having somewhere between INLINEFORM3 and INLINEFORM4 of the tweets.",
"Some of these results make intuitive sense. For example, the closer the day of week is to Friday and Saturday, the more positive the sentiment, with a drop on Sunday. As with spatial, the average sentiment of all the hours, days and months lean more towards the positive side."
],
[
"The last contextual variable we looked at was authorial. People have different baseline attitudes, some are optimistic and positive, some are pessimistic and negative, and some are in between. This difference in personalities can manifest itself in the sentiment of tweets. We attempted to capture this difference by looking at the history of tweets made by users. The 18 million labelled tweets in our dataset come from INLINEFORM0 authors.",
"In order to calculate a statistically significant average sentiment for each author, we need our sample size to not be too small. However, a large number of the users in our dataset only tweeted once or twice during the three years. Figure FIGREF33 shows the number of users in bins of 50 tweets. (So the first bin corresponds to the number of users that have less than 50 tweets throughout the three year.) The number of users in the first few bins were so large that the graph needed to be logarithmic in order to be legible. We decided to calculate the prior sentiment for users with at least 50 tweets. This corresponded to less than INLINEFORM0 of the users ( INLINEFORM1 out of INLINEFORM2 total users). Note that these users are the most prolific authors in our dataset, as they account for INLINEFORM3 of all tweets in our dataset. The users with less than 50 posts had their prior set to INLINEFORM4 , not favouring positive or negative sentiment (this way it does not have an impact on the Bayesian model, allowing other contextual variables to set the prior).",
"As it is not feasible to show the prior average sentiment of all INLINEFORM0 users, we created 20 even sentiment bins, from INLINEFORM1 to INLINEFORM2 . We then plotted the number of users whose average sentiment falls into these bins (Figure FIGREF34 ). Similar to other variables, the positive end of the graph is much heavier than the negative end."
],
[
"We used 5-fold cross validation to train and evaluate our baseline and contextual models, ensuring that the tweets in the training folds were not used in the calculation of any of the priors or in the training of the bigram models. Table TABREF35 shows the accuracy of our models. The contextual model outperformed the baseline model using any of the contextual variables by themselves, with state being the best performing and day of week the worst. The model that utilized all the contextual variables saw a INLINEFORM0 relative and INLINEFORM1 absolute improvement over the baseline bigram model.",
"Because of the great increase in the volume of data, distant supervised sentiment classifiers for Twitter tend to generally outperform more standard classifiers using human-labelled datasets. Therefore, it makes sense to compare the performance of our classifier to other distant supervised classifiers. Though not directly comparable, our contextual classifier outperforms the distant supervised Twitter sentiment classifier by Go et al BIBREF6 by more than INLINEFORM0 (absolute).",
"Table TABREF36 shows the precision, recall and F1 score of the positive and negative class for the full contextual classifier (Contextual-All)."
],
[
"Even though our contextual classifier was able to outperform the previous state-of-the-art, distant supervised sentiment classifier, it should be noted that our contextual classifier's performance is boosted significantly by spatial information extracted through geo-tags. However, only about one to two percent of tweets in the wild are geo-tagged. Therefore, we trained and evaluated our contextual model using all the variables except for state. The accuracy of this model was INLINEFORM0 , which is still significantly better than the performance of the purely linguistic classifier. Fortunately, all tweets are tagged with timestamps and author information, so all the other four contextual variables used in our model can be used for classifying the sentiment of any tweet.",
"Note that the prior probabilities that we calculated need to be recalculated and updated every once in a while to account for changes in the world. For example, a state might become more affluent, causing its citizens to become on average happier. This change could potentially have an effect on the average sentiment expressed by the citizens of that state on Twitter, which would make our priors obsolete."
],
[
"Sentiment classification of tweets is an important area of research. Through classification and analysis of sentiments on Twitter, one can get an understanding of people's attitudes about particular topics. In this work, we utilized the power of distant supervision to collect millions of noisy labelled tweets from all over the USA, across three years. We used this dataset to create prior probabilities for the average sentiment of tweets in different spatial, temporal and authorial contexts. We then used a Bayesian approach to combine these priors with standard bigram language models. The resulting combined model was able to achieve an accuracy of INLINEFORM0 , outperforming the previous state-of-the-art distant supervised Twitter sentiment classifier by more than INLINEFORM1 .",
"In the future, we would like to explore additional contextual features that could be predictive of sentiment on Twitter. Specifically, we would like to incorporate the topic type of tweets into our model. The topic type characterizes the nature of the topics discussed in tweets (e.g., breaking news, sports, etc). There has already been extensive work done on topic categorization schemes for Twitter BIBREF26 , BIBREF27 , BIBREF28 which we can utilize for this task."
],
[
"We would like to thank all the annotators for their efforts. We would also like to thank Brandon Roy for sharing his insights on Bayesian modelling. This work was supported by a generous grant from Twitter."
]
],
"section_name": [
"Introduction",
"Related Work",
"Approach",
"Data Collection and Datasets",
"Emoticon-based Labelled Dataset",
"Human Labelled Dataset",
"Data Preparation",
"Baseline Model",
"Contextual Model",
"Sentiment in Context",
"Spatial",
"Temporal",
"Authorial",
"Results",
"Discussions",
"Conclusions and Future Work",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"1537a9d04db7cad4275294e867ec25a3593e5046",
"a81c77cfadc41d55e13fdefa870bb2b83f60de10"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"It is interesting to note that even with the noisy dataset, our ranking of US states based on their Twitter sentiment correlates with the ranking of US states based on the well-being index calculated by Oswald and Wu BIBREF25 in their work on measuring well-being and life satisfaction across America. Their data is from the behavioral risk factor survey score (BRFSS), which is a survey of life satisfaction across the United States from INLINEFORM0 million citizens. Figure FIGREF27 shows this correlation ( INLINEFORM1 , INLINEFORM2 )."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"It is interesting to note that even with the noisy dataset, our ranking of US states based on their Twitter sentiment correlates with the ranking of US states based on the well-being index calculated by Oswald and Wu BIBREF25 in their work on measuring well-being and life satisfaction across America. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"58f1eb956f433ca357fd87513cdb8e999bd437b4",
"a02224e2190e99435deff05b7afe7c9d397f161f"
],
"answer": [
{
"evidence": [
"There are different methods of obtaining labelled data using distant supervision BIBREF1 , BIBREF6 , BIBREF19 , BIBREF12 . We used emoticons to label tweets as positive or negative, an approach that was introduced by Read BIBREF1 and used in multiple works BIBREF6 , BIBREF12 . We collected millions of English-language tweets from different times, dates, authors and US states. We used a total of six emoticons, three mapping to positive and three mapping to negative sentiment (table TABREF7 ). We identified more than 120 positive and negative ASCII emoticons and unicode emojis, but we decided to only use the six most common emoticons in order to avoid possible selection biases. For example, people who use obscure emoticons and emojis might have a different base sentiment from those who do not. Using the six most commonly used emoticons limits this bias. Since there are no \"neutral\" emoticons, our dataset is limited to tweets with positive or negative sentiments. Accordingly, in this work we are only concerned with analysing and classifying the polarity of tweets (negative vs. positive) and not their subjectivity (neutral vs. non-neutral). Below we will explain our data collection and corpus in greater detail."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We collected millions of English-language tweets from different times, dates, authors and US states. "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"There are different methods of obtaining labelled data using distant supervision BIBREF1 , BIBREF6 , BIBREF19 , BIBREF12 . We used emoticons to label tweets as positive or negative, an approach that was introduced by Read BIBREF1 and used in multiple works BIBREF6 , BIBREF12 . We collected millions of English-language tweets from different times, dates, authors and US states. We used a total of six emoticons, three mapping to positive and three mapping to negative sentiment (table TABREF7 ). We identified more than 120 positive and negative ASCII emoticons and unicode emojis, but we decided to only use the six most common emoticons in order to avoid possible selection biases. For example, people who use obscure emoticons and emojis might have a different base sentiment from those who do not. Using the six most commonly used emoticons limits this bias. Since there are no \"neutral\" emoticons, our dataset is limited to tweets with positive or negative sentiments. Accordingly, in this work we are only concerned with analysing and classifying the polarity of tweets (negative vs. positive) and not their subjectivity (neutral vs. non-neutral). Below we will explain our data collection and corpus in greater detail."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We collected millions of English-language tweets from different times, dates, authors and US states."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"869a9e6c05b29cce893c78e6cd39c74ba7348c5c",
"899c532eeb81c3f95883b1ea056e4ab4ea2a3a8d"
],
"answer": [
{
"evidence": [
"This paper is structured as follows. In the next sections we will review related work on sentiment classification, followed by a detailed explanation of our approach and our data collection, annotation and processing efforts. After that, we describe our baseline n-gram sentiment classifier model, followed by the explanation of how the baseline model is extended to incorporate contextual information. Next, we describe our analysis of the variation of sentiment within each of the contextual categories. We then evaluate our models and finally summarize our findings and contributions and discuss possible paths for future work.",
"There have been several works that do sentiment classification on Twitter using standard sentiment classification techniques, with variations of n-gram and bag of words being the most common. There have been attempts at using more advanced syntactic features as is done in sentiment classification for other domains BIBREF1 , BIBREF2 , however the 140 character limit imposed on tweets makes this hard to do as each article in the Twitter training set consists of sentences of no more than several words, many of them with irregular form BIBREF3 ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"After that, we describe our baseline n-gram sentiment classifier model, followed by the explanation of how the baseline model is extended to incorporate contextual information.",
"There have been attempts at using more advanced syntactic features as is done in sentiment classification for other domains BIBREF1 , BIBREF2 , however the 140 character limit imposed on tweets makes this hard to do as each article in the Twitter training set consists of sentences of no more than several words, many of them with irregular form BIBREF3 ."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"As our baseline model, we built purely linguistic bigram models in Python, utilizing some components from NLTK BIBREF22 . These models used a vocabulary that was filtered to remove words occurring 5 or fewer times. Probability distributions were calculated using Kneser-Ney smoothing BIBREF23 . In addition to Kneser-Ney smoothing, the bigram models also used “backoff” smoothing BIBREF24 , in which an n-gram model falls back on an INLINEFORM0 -gram model for words that were unobserved in the n-gram context."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"As our baseline model, we built purely linguistic bigram models in Python, utilizing some components from NLTK BIBREF22 . "
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"7cd6d76b458a0ea9b07a2dca6eb0cbe04d855fbf",
"81b399818d7d781e0b733f5d5e1659944147c4c4"
],
"answer": [
{
"evidence": [
"On the other hand, what tweets lack in structure they make up with sheer volume and rich metadata. This metadata includes geolocation, temporal and author information. We hypothesize that sentiment is dependent on all these contextual factors. Different locations, times and authors have different emotional valences. For instance, people are generally happier on weekends and certain hours of the day, more depressed at the end of summer holidays, and happier in certain states in the United States. Moreover, people have different baseline emotional valences from one another. These claims are supported for example by the annual Gallup poll that ranks states from most happy to least happy BIBREF4 , or the work by Csikszentmihalyi and Hunter BIBREF5 that showed reported happiness varies significantly by day of week and time of day. We believe these factors manifest themselves in sentiments expressed in tweets and that by accounting for these factors, we can improve sentiment classification on Twitter."
],
"extractive_spans": [
"people have different baseline emotional valences from one another"
],
"free_form_answer": "",
"highlighted_evidence": [
"Different locations, times and authors have different emotional valences. For instance, people are generally happier on weekends and certain hours of the day, more depressed at the end of summer holidays, and happier in certain states in the United States. Moreover, people have different baseline emotional valences from one another."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Figure 8: Number of users (with at least 50 tweets) per sentiment bins of 0.05, averaged across three years, from 2012 to 2014."
],
"extractive_spans": [],
"free_form_answer": "Among those who wrote more than 50 tweets, 16% of the authors have average sentiment within [0.95, 1.00], while only 1.5% of the authors have average sentiment within [-1.00, -0.95]\n",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 8: Number of users (with at least 50 tweets) per sentiment bins of 0.05, averaged across three years, from 2012 to 2014."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"451ba5a2ee7fcfdbddd5740eb7c35b68f451237b",
"7cc7960ee63d807d94d744a18b7bd425dfde6e75"
],
"answer": [
{
"evidence": [
"On the other hand, what tweets lack in structure they make up with sheer volume and rich metadata. This metadata includes geolocation, temporal and author information. We hypothesize that sentiment is dependent on all these contextual factors. Different locations, times and authors have different emotional valences. For instance, people are generally happier on weekends and certain hours of the day, more depressed at the end of summer holidays, and happier in certain states in the United States. Moreover, people have different baseline emotional valences from one another. These claims are supported for example by the annual Gallup poll that ranks states from most happy to least happy BIBREF4 , or the work by Csikszentmihalyi and Hunter BIBREF5 that showed reported happiness varies significantly by day of week and time of day. We believe these factors manifest themselves in sentiments expressed in tweets and that by accounting for these factors, we can improve sentiment classification on Twitter."
],
"extractive_spans": [
"people are generally happier on weekends and certain hours of the day, more depressed at the end of summer holidays"
],
"free_form_answer": "",
"highlighted_evidence": [
"Different locations, times and authors have different emotional valences. For instance, people are generally happier on weekends and certain hours of the day, more depressed at the end of summer holidays, and happier in certain states in the United States."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Figure 4: Average sentiment of different days of the week in the USA, averaged across three years, from 2012 to 2014.",
"FLOAT SELECTED: Figure 5: Average sentiment of different hours of the day in the USA, averaged across three years, from 2012 to 2014.",
"FLOAT SELECTED: Figure 6: Average sentiment of different months in the USA, averaged across three years, from 2012 to 2014."
],
"extractive_spans": [],
"free_form_answer": "The closer the day of the week to Friday and Saturday, the more positive the sentiment; tweets made between 10 a.m. 12 noon are most positive, while those made around 3 a.m. and 20 p.m. are least positive; tweets made in April and May are most positive, while those made in August and September are least positive.",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 4: Average sentiment of different days of the week in the USA, averaged across three years, from 2012 to 2014.",
"FLOAT SELECTED: Figure 5: Average sentiment of different hours of the day in the USA, averaged across three years, from 2012 to 2014.",
"FLOAT SELECTED: Figure 6: Average sentiment of different months in the USA, averaged across three years, from 2012 to 2014."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0d77888d7090462110cae619a28c5b535b7cd145",
"c92a611cddb51abd4954533835ba513f8a1fbd86"
],
"answer": [
{
"evidence": [
"On the other hand, what tweets lack in structure they make up with sheer volume and rich metadata. This metadata includes geolocation, temporal and author information. We hypothesize that sentiment is dependent on all these contextual factors. Different locations, times and authors have different emotional valences. For instance, people are generally happier on weekends and certain hours of the day, more depressed at the end of summer holidays, and happier in certain states in the United States. Moreover, people have different baseline emotional valences from one another. These claims are supported for example by the annual Gallup poll that ranks states from most happy to least happy BIBREF4 , or the work by Csikszentmihalyi and Hunter BIBREF5 that showed reported happiness varies significantly by day of week and time of day. We believe these factors manifest themselves in sentiments expressed in tweets and that by accounting for these factors, we can improve sentiment classification on Twitter."
],
"extractive_spans": [
"happier in certain states in the United States"
],
"free_form_answer": "",
"highlighted_evidence": [
"Different locations, times and authors have different emotional valences. For instance, people are generally happier on weekends and certain hours of the day, more depressed at the end of summer holidays, and happier in certain states in the United States."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"It is interesting to note that even with the noisy dataset, our ranking of US states based on their Twitter sentiment correlates with the ranking of US states based on the well-being index calculated by Oswald and Wu BIBREF25 in their work on measuring well-being and life satisfaction across America. Their data is from the behavioral risk factor survey score (BRFSS), which is a survey of life satisfaction across the United States from INLINEFORM0 million citizens. Figure FIGREF27 shows this correlation ( INLINEFORM1 , INLINEFORM2 )."
],
"extractive_spans": [
"ranking of US states based on their Twitter sentiment correlates with the ranking of US states based on the well-being index"
],
"free_form_answer": "",
"highlighted_evidence": [
"It is interesting to note that even with the noisy dataset, our ranking of US states based on their Twitter sentiment correlates with the ranking of US states based on the well-being index calculated by Oswald and Wu BIBREF25 in their work on measuring well-being and life satisfaction across America. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do the authors mention any possible confounds in this study?",
"Do they report results only on English data?",
"Are there any other standard linguistic features used, other than ngrams?",
"What is the relationship between author and emotional valence?",
"What is the relationship between time and emotional valence?",
"What is the relationship between location and emotional valence?"
],
"question_id": [
"deed225dfa94120fafcc522d4bfd9ea57085ef8d",
"3df6d18d7b25d1c814e9dcc8ba78b3cdfe15edcd",
"9aabcba3d44ee7d0bbf6a2c019ab9e0f02fab244",
"242c626e89bca648b65af135caaa7ceae74e9720",
"bba677d1a1fe38a41f61274648b386bdb44f1851",
"b6c2a391c4a94eaa768150f151040bb67872c0bf"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: List of emoticons.",
"Figure 1: Percentage of tweets per state in the USA, sorted from lowest to highest.",
"Figure 2: Average sentiment of states in the USA, averaged across three years, from 2012 to 2014.",
"Figure 3: Ranking of US states based on Twitter sentiment vs. ranking of states based on their wellbeing index. r = 0.44, p < 0.005.",
"Figure 4: Average sentiment of different days of the week in the USA, averaged across three years, from 2012 to 2014.",
"Figure 5: Average sentiment of different hours of the day in the USA, averaged across three years, from 2012 to 2014.",
"Figure 6: Average sentiment of different months in the USA, averaged across three years, from 2012 to 2014.",
"Figure 7: Number of users (logarithmic) in bins of 50 tweets. The first bin corresponds to number of users that have less than 50 tweets throughout the three years and so on.",
"Figure 8: Number of users (with at least 50 tweets) per sentiment bins of 0.05, averaged across three years, from 2012 to 2014.",
"Table 3: Classifier accuracy, sorted from worst to best.",
"Table 4: Precision, recall and F1 score of the full contextual classifier (Contexual-All)."
],
"file": [
"4-Table1-1.png",
"6-Figure1-1.png",
"7-Figure2-1.png",
"7-Figure3-1.png",
"7-Figure4-1.png",
"8-Figure5-1.png",
"8-Figure6-1.png",
"8-Figure7-1.png",
"8-Figure8-1.png",
"9-Table3-1.png",
"9-Table4-1.png"
]
} | [
"What is the relationship between author and emotional valence?",
"What is the relationship between time and emotional valence?"
] | [
[
"1605.05195-8-Figure8-1.png",
"1605.05195-Introduction-3"
],
[
"1605.05195-7-Figure4-1.png",
"1605.05195-8-Figure5-1.png",
"1605.05195-8-Figure6-1.png",
"1605.05195-Introduction-3"
]
] | [
"Among those who wrote more than 50 tweets, 16% of the authors have average sentiment within [0.95, 1.00], while only 1.5% of the authors have average sentiment within [-1.00, -0.95]\n",
"The closer the day of the week to Friday and Saturday, the more positive the sentiment; tweets made between 10 a.m. 12 noon are most positive, while those made around 3 a.m. and 20 p.m. are least positive; tweets made in April and May are most positive, while those made in August and September are least positive."
] | 302 |
1604.05559 | Efficient Calculation of Bigram Frequencies in a Corpus of Short Texts | We show that an efficient and popular method for calculating bigram frequencies is unsuitable for bodies of short texts and offer a simple alternative. Our method has the same computational complexity as the old method and offers an exact count instead of an approximation. | {
"paragraphs": [
[
"This short note is the result of a brief conversation between the authors and Joel Nothman. We came across a potential problem, he gave a sketch of a fix, and we worked out the details of a solution."
],
[
"A common task in natural language processing is to find the most frequently occurring word pairs in a text(s) in the expectation that these pairs will shed some light on the main ideas of the text, or offer insight into the structure of the language. One might be interested in pairings of adjacent words, but in some cases one is also interested in pairs of words in some small neighborhood. The neighborhood is usually refered to as a window, and to illustrate the concept consider the following text and bigram set:",
"Text: “I like kitties and doggies”",
"Window: 2",
"Bigrams: {(I like), (like kitties), (kitties and), (and doggies)} and this one:",
"Text: “I like kitties and doggies”",
"Window: 4",
"Bigrams: {(I like), (I kitties), (I and), (like kitties), (like and), (like doggies), (kitties and), (kitties doggies), (and doggies)}."
],
[
"Bigram frequencies are often calculated using the approximation ",
"$$freq(*, word) = freq(word, *) = freq(word)$$ (Eq. 1) ",
"In a much cited paper, Church and Hanks BIBREF0 use ` $=$ ' in place of ` $\\approx $ ' because the approximation is so good. Indeed, this approximation will only cause errors for the very few words which occur near the beginning or the end of the text. Take for example the text appearing above - the bigram (doggies, *) does not occur once, but the approximation says it does.",
"An efficient method for computing the contingency matrix for a bigram (word1, word2) is suggested by the approximation. Store $freq(w1, w2)$ for all bigrams $(w1, w2)$ and the frequencies of all words. Then,",
"The statistical importance of miscalculations due to this method diminishes as our text grows larger and larger. Interest is growing in the analysis of small texts, however, and a means of computing bigrams for this type of corpus must be employed. This approximation is implemented in popular NLP libraries and can be seen in many tutorials across the internet. People who use this code, or write their own software, must know when it is appropriate."
],
[
"We propose an alternative. As before, store the frequencies of words and the frequencies of bigrams, but this time store two additional maps called too_far_left and too_far_right, of the form {word : list of offending indices of word}. The offending indices are those that are either too far to the left or too far to the right for approximation ( 1 ) to hold. All four of these structures are built during the construction of a bigram finder, and do not cripple performance when computing statistical measures since maps are queried in $O(1)$ time.",
"As an example of the contents of the new maps, in “Dogs are better than cats\", too_far_left[`dog'] = [0] for all windows. In “eight mice eat eight cheese sticks” with window 5, too_far_left[`eight'] = [0,3]. For ease of computation the indices stored in too_far_right are transformed before storage using: ",
"$$\\widehat{idx} = length - idx - 1 = g(idx)$$ (Eq. 6) ",
"where $length$ is the length of the small piece of text being analyzed. Then, too_far_right[`cats'] = [ $g(4)= idx$ ] = [ $0 = \\widehat{idx}$ ].",
"Now, to compute the exact number of occurrences of a bigram we do the computation: ",
"$$freq(*, word) = (w-1)*wordfd[word] - \\sum \\limits _{i=1}^{N}(w-tfl[word][i] - 1)$$ (Eq. 7) ",
"where $w$ is the window size being searched for bigrams, $wfd$ is a frequency distribution of all words in the corpus, $tfl$ is the map too_far_left and $N$ is the number of occurrences of the $word$ in a position too far left.The computation of $freq(word, *)$ can now be performed in the same way by simply substituting $tfl$ with $tfr$ thanks to transformation $g$ , which reverses the indexing. "
]
],
"section_name": [
"Acknowledgements",
"Calculating Bigram Frequecies",
"The Popular Approximation",
"An Alternative Method"
]
} | {
"answers": [
{
"annotation_id": [
"0d9ae4e7b916dfdc0c71437bbfb6fd503de839d3",
"d56ef77f5aeec1b1852376e99e0b8d04c556a5f9"
],
"answer": [
{
"evidence": [
"Text: “I like kitties and doggies”",
"Window: 2",
"Bigrams: {(I like), (like kitties), (kitties and), (and doggies)} and this one:",
"Window: 4",
"Bigrams: {(I like), (I kitties), (I and), (like kitties), (like and), (like doggies), (kitties and), (kitties doggies), (and doggies)}."
],
"extractive_spans": [],
"free_form_answer": "O(2**N)",
"highlighted_evidence": [
"Text: “I like kitties and doggies”\n\nWindow: 2\n\nBigrams: {(I like), (like kitties), (kitties and), (and doggies)} and this one:\n\nText: “I like kitties and doggies”\n\nWindow: 4\n\nBigrams: {(I like), (I kitties), (I and), (like kitties), (like and), (like doggies), (kitties and), (kitties doggies), (and doggies)}."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"db4f6f1ac73349bcebd4f6bf06de67906f18db9b",
"7dd5db428d7a43d2945b97c0c07fa56af4eb02ae"
]
},
{
"annotation_id": [
"7c6130366dfa70e9b39f0ecc5ef3c52a8b81a928",
"bcab7640269f1114852d7b276610239018a17556"
],
"answer": [
{
"evidence": [
"Bigram frequencies are often calculated using the approximation",
"$$freq(*, word) = freq(word, *) = freq(word)$$ (Eq. 1)",
"In a much cited paper, Church and Hanks BIBREF0 use ` $=$ ' in place of ` $\\approx $ ' because the approximation is so good. Indeed, this approximation will only cause errors for the very few words which occur near the beginning or the end of the text. Take for example the text appearing above - the bigram (doggies, *) does not occur once, but the approximation says it does.",
"An efficient method for computing the contingency matrix for a bigram (word1, word2) is suggested by the approximation. Store $freq(w1, w2)$ for all bigrams $(w1, w2)$ and the frequencies of all words. Then,",
"The statistical importance of miscalculations due to this method diminishes as our text grows larger and larger. Interest is growing in the analysis of small texts, however, and a means of computing bigrams for this type of corpus must be employed. This approximation is implemented in popular NLP libraries and can be seen in many tutorials across the internet. People who use this code, or write their own software, must know when it is appropriate."
],
"extractive_spans": [
"freq(*, word) = freq(word, *) = freq(word)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Bigram frequencies are often calculated using the approximation\n\n$$freq(*, word) = freq(word, *) = freq(word)$$ (Eq. 1)\n\nIn a much cited paper, Church and Hanks BIBREF0 use ` $=$ ' in place of ` $\\approx $ ' because the approximation is so good. Indeed, this approximation will only cause errors for the very few words which occur near the beginning or the end of the text. Take for example the text appearing above - the bigram (doggies, *) does not occur once, but the approximation says it does.\n\nAn efficient method for computing the contingency matrix for a bigram (word1, word2) is suggested by the approximation. Store $freq(w1, w2)$ for all bigrams $(w1, w2)$ and the frequencies of all words. Then,\n\nThe statistical importance of miscalculations due to this method diminishes as our text grows larger and larger. Interest is growing in the analysis of small texts, however, and a means of computing bigrams for this type of corpus must be employed. This approximation is implemented in popular NLP libraries and can be seen in many tutorials across the internet. People who use this code, or write their own software, must know when it is appropriate."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Bigram frequencies are often calculated using the approximation",
"$$freq(*, word) = freq(word, *) = freq(word)$$ (Eq. 1)",
"In a much cited paper, Church and Hanks BIBREF0 use ` $=$ ' in place of ` $\\approx $ ' because the approximation is so good. Indeed, this approximation will only cause errors for the very few words which occur near the beginning or the end of the text. Take for example the text appearing above - the bigram (doggies, *) does not occur once, but the approximation says it does."
],
"extractive_spans": [
"$$freq(*, word) = freq(word, *) = freq(word)$$ (Eq. 1)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Bigram frequencies are often calculated using the approximation\n\n$$freq(*, word) = freq(word, *) = freq(word)$$ (Eq. 1)\n\nIn a much cited paper, Church and Hanks BIBREF0 use ` $=$ ' in place of ` $\\approx $ ' because the approximation is so good."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"db4f6f1ac73349bcebd4f6bf06de67906f18db9b",
"7dd5db428d7a43d2945b97c0c07fa56af4eb02ae"
]
}
],
"nlp_background": [
"two",
"two"
],
"paper_read": [
"no",
"no"
],
"question": [
"What is the computational complexity of old method",
"Could you tell me more about the old method?"
],
"question_id": [
"06d5de706348dbe8c29bfacb68ce65a2c55d0391",
"6014c2219d29bae17279625716e7c2a1f8a2bd05"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"efficient",
"efficient"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [],
"file": []
} | [
"What is the computational complexity of old method"
] | [
[
"1604.05559-Calculating Bigram Frequecies-5",
"1604.05559-Calculating Bigram Frequecies-2",
"1604.05559-Calculating Bigram Frequecies-3",
"1604.05559-Calculating Bigram Frequecies-1",
"1604.05559-Calculating Bigram Frequecies-6"
]
] | [
"O(2**N)"
] | 303 |
2002.03056 | autoNLP: NLP Feature Recommendations for Text Analytics Applications | While designing machine learning based text analytics applications, often, NLP data scientists manually determine which NLP features to use based upon their knowledge and experience with related problems. This results in increased efforts during feature engineering process and renders automated reuse of features across semantically related applications inherently difficult. In this paper, we argue for standardization in feature specification by outlining structure of a language for specifying NLP features and present an approach for their reuse across applications to increase likelihood of identifying optimal features. | {
"paragraphs": [
[
"For an ever increasing spectrum of applications (e.g., medical text analysis, opinion mining, sentiment analysis, social media text analysis, customer intelligence, fraud analytics etc.) mining and analysis of unstructured natural language text data is necessary BIBREF0, BIBREF1, BIBREF2.",
"One of key challenge while designing such text analytics (TA) applications is to identify right set of features. For example, for text classification problem, different sets of features have been considered in different works (spanning a history of more than twenty years) including `bag of words', `bag of phrases', `bag of n-grams', `WordNet based word generalizations', and `word embeddings' BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Even for recent end-to-end designs using deep neural networks, specification of core features remains manually driven BIBREF8, BIBREF9. During feature engineering, often data scientists manually determine which features to use based upon their experience and expertise with respect to the underlying application domain as well as state-of-the-art tools and techniques. Different tools (e.g., NLTK BIBREF10, Mallet BIBREF11, Stanford CoreNLP BIBREF12, Apache OpenNLP BIBREF13, Apache Lucene BIBREF14, etc.) available to a NLP data scientist for TA application design and development often differ in terms of support towards extraction of features, level of granularity at which feature extraction process is to be specified; and these tools often use different programing vocabularies to specify semantically equivalent features.",
"Currently, there is no generic method or approach, which can be applied during TA application's design process to define and extract features for any arbitrary application in an automated or semi-automated manner. Even there is no single way to express wide range of NLP features, resulting into increased efforts during feature engineering which has to start new for each data scientist and automated reuse of features across semantically similar or related applications designed by different data scientists is difficult. This also hinders foundational studies on NLP feature engineering including why certain features are more critical than others.",
"In this paper, we aim to present an approach towards automating NLP feature engineering. We start with an outline of a language for expressing NLP features abstracting over the feature extraction process, which often implicitly captures intent of the NLP data scientist to extract specific features from given input text. We next discuss a method to enable automated reuse of features across semantically related applications when a corpus of feature specifications for related applications is available. Proposed language and system would help achieving reduction in manual effort towards design and extraction of features, would ensure standardization in feature specification process, and could enable effective reuse of features across similar and/or related applications."
],
[
"Figure FIGREF1 depicts typical design life cycle of a (traditional) ML based solution for the TA applications, which involves steps to manually define relevant features and implement code components to extract those feature from input text corpus during training, validation, testing and actual usage of the application. In traditional ML based solutions, feature interactions also need to be explicitly specified, though this step is largely automated when using deep neural network based solutions BIBREF9.",
"As the process of defining features is manual, prior experience and expertize of the designer affects which features to extract and how to extract these features from input text. Current practice lacks standardization and automation in feature definition process, provides partial automation in extraction process, and does not enable automated reuse of features across related application.",
"Next, let us consider scenarios when features are specified as elements of a language. Let us refer to this language as NLP Feature Specification Language (nlpFSpL) such that a program in nlpFSpL would specify which features should be used by the underlying ML based solution to achieve goals of the target application. Given a corpus of unstructured natural language text data and a specifications in the nlpFSpL, an interpreter can be implemented as feature extraction system (FExSys) to automatically generate feature matrix which can be directly used by underlying ML technique.",
"In contrast to the life-cycle view in the Figure FIGREF1, this would result into refined solution life cycle of ML based TA applications as depicted in the Figure FIGREF2."
],
[
"Figure FIGREF4 specifies the meta elements of the nlpFSpL which are used by the FExSys while interpreting other features.",
"Analysis Unit (AU) specifies level at which features have to be extracted. At Corpus level, features are extracted for all the text documents together. At Document level, features are extracted for each document in corpus separately. At Para (paragraph) level Features are extracted for multiple sentences constituting paragraphs together. At Sentence level features to be extracted for each sentence. Figure FIGREF6 depicts classes of features considered in nlpFSpL and their association with different AUs.",
"Syntactic Unit (SU) specifies unit of linguistic features. It could be a `Word' or a `Phrase', or a `N-gram' or a sequence of words matching specific lexico-syntactic pattern captured as `POS tag pattern' (e.g., Hearst pattern BIBREF15) or a sequence of words matching specific regular expression `Regex' or a combination of these. Option Regex is used for special types of terms, e.g., Dates, Numbers, etc. LOGICAL is a Boolean logical operator including AND, OR and NOT (in conjunction with other operator). For example, Phrase AND POS Regex would specify inclusion of a `Phrase' as SU when its constituents also satisfy 'regex' of `POS tags'. Similarly, POS Regex OR NOT(Regex) specifies inclusion of sequence of words as SU if it satisfies `POS tag Pattern' but does not match pattern specified by character `Regex'. Note that SU can be a feature in itself for document and corpus level analysis.",
"Normalize Morphosyntactic Variants: If YES, variants of words including stems, lemmas, and fuzzy matches will be identified before analyzing input text for feature exaction and would be treated equivalent."
],
[
"Figure FIGREF8 depicts two levels of taxonomy for features considered as linguistic.",
"To illustrate, let us consider context based features: Table FIGREF9 gives various options which need to be specified for directing how context for an SU should be extracted. For example, Context_Window := [2, Sentence] will extract all tokens within current sentence, which are present within a distance of 2 on both sides from the current SU. However, Context_Window := [2, Sentence]; POSContext := NN$\\mid $VB will extract only those tokens within current sentence, which are present within a distance of 2 on both sides from the current SU and have POS tag either NN (noun singular) or VB (verb, base form).",
"Table FIGREF10 illustrates how to specify the way head directionality of current SU should be extracted."
],
[
"Semantic similarity can be estimated between words, between phrases, between sentences, and between documents in a corpus. Estimation could either be based upon corpus text alone by applying approaches like vector space modeling BIBREF16, latent semantic analysis BIBREF17, topic modeling BIBREF18, or neural embeddings (e.g., Word2Vec BIBREF19 or Glove BIBREF20) and their extensions to phrase, sentence, and document levels. Otherwise it can be estimated based upon ontological relationships (e.g., WordNet based BIBREF21) among concept terms appearing in the corpus."
],
[
"Figure FIGREF13 depicts different types of statistical features which can be extracted for individual documents or corpus of documents together with methods to extract these features at different levels.",
"In particular, examples of distributions which can be estimated include frequency distributions for terms, term distributions in topics and topic distributions within documents, and distribution of term inter-arrival delay, where inter arrival delay for a term measures number of terms occurring between two successive occurrences of a term."
],
[
"Let us consider a problem of identifying medical procedures being referenced in a medical report.",
"Sample Input (Discharge summary): ““This XYZ non-interventional study report from a medical professional. Spontaneous report from a physician on 01-Oct-1900. Patient condition is worsening day by day. Unknown if it was started before or after levetiracetam initiation. The patient had a history of narcolepsy as well as cataplexy. Patient condition has not recovered. On an unknown date patient was diagnosed to have epilepsy. The patient received the first dose of levetiracetam for seizure.\"",
"Table TABREF14 shows specification of features in nlpFSpL.",
"Table TABREF15 and Table TABREF16 contain corresponding feature matrix (limited to first sentence and does not involve any preprocessing of input text)."
],
[
"Next let us consider a case for enabling automated reuse of feature specifications in nlpFSpL across different semantically related applications."
],
[
"To illustrate that semantically different yet related applications may have significant potential for reuse of features, let us consider the problem of event extraction, which involves identifying occurrences of specific type of events or activities from raw text.",
"Towards that, we analysed published works on three different types of events in different domains as described next:",
"Objective of this study is to design a ML model for identifying if there exist mentions of one of the nine types of bio-molecular interactions in (publicly available) Biomedical data. To train SVM based classifier, authors use GENETAG database, which is a tagged corpus for gene/protein named entity recognition. BioNLP 2009 shared task test-set was used to estimate performance of the system. Further details can be found at BIBREF22.",
"Objective of the study was to design ML model for enabling automated detection of specific financial events in the news text. Ten different types of financial events were considered including announcements regarding CEOs, presidents, products, competitors, partners, subsidiaries, share values, revenues, profits, and losses. To train and test SVM and CRF based ML models, authors used data set consisting of 200 news messages extracted from the Yahoo! Business and Technology newsfeeds, having financial events and relations manually annotated by 3 domain experts. Further details can be found at BIBREF23.",
"Objective of the study was to design an ML based system for extracting open domain calendar of significant events from Twitter-data. 38 different types of events were considered for designing the system. To train the ML model, an annotated corpus of 1000 tweets (containing 19,484 tokens) was used and trained model was tested on 100 million most recent tweets. Further details can be found at BIBREF24.",
"Table TABREF21 below depicts classes of features selected by authors of these works (as described in the corresponding references above) to highlight the point that despite domain differences, these applications share similar sets of features. Since authors of these works did not cite each other, it is possible that that these features might have been identified independently. This, in turn, supports the hypothesis that if adequate details of any one or two of these applications are fed to a system described in this work, which is designed to estimate semantic similarities across applications, system can automatically suggest potential features for consideration for the remaining applications to start with without requiring manual knowledge of the semantically related applications."
],
[
"Figure FIGREF23 depicts overall process flow for enabling automated feature recommendations.",
"For a new text analytics application requiring feature engineering, it starts with estimating its semantic proximity (from the perspective of a NLP data scientist) with existing applications with known features. Based upon these proximity estimates as well as expected relevance of features for existing applications, system would recommend features for the new application in a ranked order. Furthermore, if user's selections are not aligned with system's recommendations, system gradually adapts its recommendation so that eventually it can achieve alignment with user preferences.",
"Towards that let us start with characterizing text analytics applications. A TA application's details should include following fields:",
"Text based description of an TA application (or problem). For example, “identify medical procedures being referenced in a discharge summary” or “what are the input and output entities mentioned in a software requirements specification”.",
"Analysis unit at which features are to be specified and training annotations are available, and ML model is designed to give outcomes. Options include word, phrase, sentence, paragraph, or document.",
"Specifies technical classification of the underlying ML challenge with respect to a well-defined ontology. E.g., Classification (with details), Clustering, etc.",
"Specifies how performance of the ML model is to be measured - again should be specified as per some well defined ontology.",
"Knowledge base of text analytics applications contains details for text analytics applications in above specified format. Each application is further assumed to be associated with a set of Features (or feature types) specified in nlpFSpL together with their relevance scores against a performance metric. Relevance score of a feature is a measure of the extent to which this feature contributes to achieve overall performance of ML model while solving the underlying application. Relevance score may be estimated using any of the known feature selection metrics BIBREF25.",
"To specify knowledge base formally, let us assume that there are $m$ different applications and $k$ unique feature specifications across these applications applying same performance metric. Let us denote these as follows: $APPS=\\lbrace App_1,\\ldots , App_m\\rbrace $ and ${\\mathit {\\Theta }}_F$ = $\\left\\lbrace F_1,F_2,\\dots ,F_k\\right\\rbrace $ respectively. Knowledge base is then represented as a feature co-occurrence matrix $PF_{m\\times k}$ such that $PF[i,j] = \\delta _{i,F_j}$ is the relevance score of $j^{th}$ feature specification ($F_j\\in \\mathit {\\Theta }_F$) for $i^{th}$ application $App_i\\in APPS$."
],
[
"To begin, for each text based field in each TA application, pre-process text and perform term normalization (i.e., replacing all equivalent terms with one representative term in the whole corpus) including stemming, short-form and long-form (e.g., ‘IP’ and ‘Intellectual Property’), language thesaurus based synonyms (e.g., WordNet based `goal' and `objective')."
],
[
"Thereafter, we identify potential `entity-terms' as `noun-phrases' and `action terms' as `verb-phrases' by applying POS-Tagging and Chunking. E.g., In sentence – “This XYZ non-interventional study report is prepared by a medical professional”, identifiable entity terms are “this XYZ non-interventional study report” and “medical professional” and identifiable functionality is `prepare'."
],
[
"Analyze the corpus of all unique words generated from the text based details across all applications in the knowledge base. Generally corpus of such textual details would be relatively small, therefore, one can potentially apply pre-trained word embeddings (e.g., word2vec BIBREF19 or Glove BIBREF20). Let $v(w)$ be the neural embedding of word $w$ in the corpus. We need to follow additional steps to generate term-level embeddings (alternate solutions also exist BIBREF26): Represent corpus into Salton's vector space model BIBREF16 and estimate information theoretic weighing for each word using BM25 BIBREF27 scheme: Let $BM25(w)$ be the weight for word $w$. Next update word embedding as $v(w)\\leftarrow BM25(w)\\times v(w)$. For each multi-word term $z=w_1\\dots w_n$, generate term embedding by averaging embeddings of constituent words: $v(z)\\leftarrow \\Sigma _{i=1}^{i=n}v(w_i)$.",
"In terms of these embeddings of terms, for each text based field of each application in the knowledge base, generate field level embedding as a triplet as follows: Let $f$ be a field of an application in $APPS$. Let the lists of entity-terms and action-terms in $f$ be $en(f)$ and $act(f)$ respectively. Let remaining words in $f$ be: $r(f)$. Estimate embedding for $f$ as: $v(f)$=$[v(en(f))$, $v(act(f))$, $v(r(f))]$, where $v(en(f))$=$\\Sigma _{z\\in en(f)} v(z)$, $v(act(f))$=$\\Sigma _{z\\in act(f)}v(z)$, and $v(r(f))$=$\\Sigma _{z\\in r(f)}v(z)$."
],
[
"After representing different fields of an application into embedding space (except AU), estimate field level similarity between two applications as follows: Let $[X_i^{en}$, $X^{act}_i$, $X_i^{r}]$ and $[X_j^{en}$, $X^{act}_j$, $X_j^{r}]$ be the representations for field $f$ for two applications $App_i$, $App_j$ $\\in APPS$. In terms of these, field level similarity is estimated as $\\Delta _{f}(App_i,App_j)$ = $[\\Delta _{en}({f_{i}, f_j})$, $\\Delta _{act}({f_{i}, f_j})$, $\\Delta _{r}({f_{i}, f_j})]$, where $\\Delta _{en}({f_{i}, f_j})$ = 0 if field level details of either of the applications is unavailable else $\\Delta _{en}({f_{i}, f_j})$ = $cosine(X_i^{en}$, $X_j^{en})$; etc.",
"For the field - AU, estimate $\\Delta (au_i,au_j)$ = $1 \\textit { if analysis units for both applications are same i.e., } au_i = au_j$ else 0.",
"In terms of these, let $\\Delta (App_i,App_j)$ = $[\\Delta _{en}({bd_{i}, bd_j})$, $\\Delta _{act}({bd_{i}, bd_j})$, $\\Delta _{r}({bd_{i}, bd_j}),$ $\\Delta _{en}({dd_{i}, f_j})$, $\\ldots $, $\\Delta ({au_{i}, au_j})]$ be overall similarity vector across fields, where $bd$ refers to the field `problem description' etc. Finally, estimate mean similarity across constituent fields as a proximity between corresponding applications."
],
[
"Let $NewP$ be new application for which features need to be specified in nlpFSpL. Represent fields of $NewP$ similar to existing applications in the knowledge base (as described earlier in the Section SECREF30).",
"Next, create a degree-1 ego-similarity network for $NewP$ to represent how close is $NewP$ with existing applications in $APPS$. Let this be represented as a diagonal matrix $\\Delta _{m\\times m}$ such that $\\Delta [r,r] = \\alpha _i =$ proximity between $NewP$ and $i^{th}$ application in the knowledge base (by applying steps in the Section SECREF31).",
"Thereafter, let $NorSim_{m\\times k}=\\Delta _{m\\times m} \\times PF_{m \\times k}$ such that $NorSim[i,j]= \\alpha _i \\delta _{i,F_j}$ measures probable relevance of feature $F_j$ for $NewP$ w.r.t. performance metric $M$ based upon its relevance for $App_i \\in APPS$. When there are multiple applications in $APPS$, we need to define a policy to determine collective probable relevance of a feature specification in $\\mathit {\\Theta }_F$ for $NewP$ based upon its probable relevance scores with respect to different applications.",
"To achieve that, let $Relevance$ $(NewP, f_j)$ be the relevance of $f_j$ for $NewP$ based upon a policy, which can be estimated in different ways including following:",
"Next, consider there different example policies:",
"Weakest relevance across applications is considered: $Relevance(NewP,F_j) = \\min _{i\\in 1..m}{NorSim[i,j]}$",
"Strongest relevance across applications is considered: $Relevance(NewP,F_j) = \\max _{i\\in 1..m}{NorSim[i,j]}$",
"Most likely relevance across applications is considered: $Relevance(NewP,F_j) = \\frac{1}{m}\\Sigma _{i\\in 1..m}{NorSim[i,j]}$",
"Rank feature specifications in $\\mathit {\\Theta }_F$ decreasing order based upon $Relevance(NewP,.)$, which are suggested to the NLP Data Scientist together with the supporting evidence."
],
[
"There are two different modes in which user may provide feedback to the system with respect to recommended features: one where it ranks features differently and second where user provides different relevance scores (e.g., based upon alternate design or by applying feature selection techniques). Aim is to use these feed-backs to learn an updated similarity scoring function $\\Delta _{new}:APPS \\times APPS$ $\\rightarrow $ $[0,1]$.",
"In relation to $NewP$, let $Rank:{\\mathrm {\\Theta }}_F\\times \\left\\lbrace system,user\\right\\rbrace \\rightarrow \\lbrace 1,\\dots ,k\\rbrace ~$ return rank of a feature and $Rel:{\\mathrm {\\Theta }}_F\\times \\left\\lbrace system,user\\right\\rbrace \\rightarrow [0,1]$ return relevance score of a feature based upon type – `system' or `user'.",
"Next, for each $App\\in APPS$, let ${Ch}\\left[App\\right]$ $\\leftarrow $ $\\emptyset $ be a hash table with keys as application ids and values as list of numbers estimated next. Also let $NewSim_{FE}\\left[.\\right]\\leftarrow 0$ contain updated similarity scores between $NewP$ and existing applications in $APPS$.",
"For each feature specification $f\\in \\Theta _F$, determine whether `user' given rank is different from `system' given rank, i.e., $Rank(f,`system^{\\prime }) \\ne Rank(f,`user^{\\prime })$. If so, execute steps next.",
"Let $\\mathit {Bind}(f_j) \\subseteq APPS$ be the list of applications, which contributed in estimating collective relevance for feature $f_j \\in \\Theta _F$. For example, when aggressive or conservative policy is considered, $\\mathit {Bind}(f_j)$ = $\\lbrace App_r \\mid Relevance(NewP,f_j)$ = $\\mathit {NorSim[r,j]}\\rbrace $.",
"For each $App_i\\in Bind\\left(f_j\\right)$: Add $x$ to $Ch[App_i]$, where $x$ is estimated as follows: If user provides explicit relevance scores for $App_i$,",
"Otherwise if user re-ranks features",
"For each $App_i\\in Bind\\left(f_j\\right)$: $NewSim_{FE}\\left[App_i\\right]\\leftarrow Average(Ch[App_i])$. If $|NewSim_{FE}[App_i]$-${\\alpha }_i|$ $\\ge \\epsilon {\\alpha }_i$ i.e., when the difference between old and new similarity scores is more than $\\epsilon $ fraction of original similarity, add $(\\Delta (NewP,App_i),NewSim_{FE}[App_i])$ to training set $Tr_{rpls}$ so that it used to train a regression model for $\\Delta _{new}(.,.)$ by applying partial recursive PLS BIBREF28 with $\\Delta (NewP,App_i)$ as set of predictor or independent variables and $NewSim_{FE}[App_i]$ as response variable. Existing proximity scores between applications in $APPS$ (ref. Section SECREF31) are also added to training set $Tr_{rpls}$ before generating the regression model.",
"Note that $\\epsilon $ is a small fraction $>$ 0 which controls when should similarity model be retrained. For example, $\\epsilon = 0.05$ would imply that if change in similarity is more than 5% only then it underlying model should use this feedback for retraining."
],
[
"In this paper, we have presented high level overview of a feature specification language for ML based TA applications and an approach to enable reuse of feature specifications across semantically related applications. Currently, there is no generic method or approach, which can be applied during TA applications' design process to define and extract features for any arbitrary application in an automated or semi-automated manner primarily because there is no standard way to specify wide range of features which can be extracted and used. We considered different classes of features including linguistic, semantic, and statistical for various levels of analysis including words, phrases, sentences, paragraphs, documents, and corpus. As a next step, we presented an approach for building a recommendation system for enabling automated reuse of features for new application scenarios which improves its underlying similarity model based upon user feedback.",
"To take this work forward, it is essential to have it integrated to a ML platform, which is being used by large user base for building TA applications so that to be able to populate a repository of statistically significant number of TA applications with details as specified in Section SECREF5 and thereafter refine the proposed approach so that eventually it rightly enables reuse of features across related applications."
]
],
"section_name": [
"Introduction",
"Life Cycle View",
"NLP Feature Specification Language ::: Meta Elements",
"NLP Feature Specification Language ::: Feature Types ::: Linguistic Features",
"NLP Feature Specification Language ::: Feature Types ::: Semantic Similarity and Relatedness based Features",
"NLP Feature Specification Language ::: Feature Types ::: Statistical Features",
"Illustration of nlpFSpL Specification",
"NLP Feature Reuse across TA Applications",
"NLP Feature Reuse across TA Applications ::: Illustrative Example",
"NLP Feature Reuse across TA Applications ::: Reuse Process",
"NLP Feature Reuse across TA Applications ::: Measuring Proximity between Applications",
"NLP Feature Reuse across TA Applications ::: Measuring Proximity between Applications ::: Identify key Terms",
"NLP Feature Reuse across TA Applications ::: Measuring Proximity between Applications ::: Generate Distributed Representations",
"NLP Feature Reuse across TA Applications ::: Measuring Proximity between Applications ::: Estimating Proximity between Applications",
"NLP Feature Reuse across TA Applications ::: Feature Recommendations",
"NLP Feature Reuse across TA Applications ::: Continuous Learning from User Interactions",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"2aa1bb30a493615a71ca3d470ceda3b297048f08",
"a945ac39f241da613c350f447b316db9f72901fc"
],
"answer": [
{
"evidence": [
"For a new text analytics application requiring feature engineering, it starts with estimating its semantic proximity (from the perspective of a NLP data scientist) with existing applications with known features. Based upon these proximity estimates as well as expected relevance of features for existing applications, system would recommend features for the new application in a ranked order. Furthermore, if user's selections are not aligned with system's recommendations, system gradually adapts its recommendation so that eventually it can achieve alignment with user preferences."
],
"extractive_spans": [
"estimating its semantic proximity (from the perspective of a NLP data scientist) with existing applications with known features",
"system would recommend features for the new application in a ranked order"
],
"free_form_answer": "",
"highlighted_evidence": [
"For a new text analytics application requiring feature engineering, it starts with estimating its semantic proximity (from the perspective of a NLP data scientist) with existing applications with known features. Based upon these proximity estimates as well as expected relevance of features for existing applications, system would recommend features for the new application in a ranked order."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For a new text analytics application requiring feature engineering, it starts with estimating its semantic proximity (from the perspective of a NLP data scientist) with existing applications with known features. Based upon these proximity estimates as well as expected relevance of features for existing applications, system would recommend features for the new application in a ranked order. Furthermore, if user's selections are not aligned with system's recommendations, system gradually adapts its recommendation so that eventually it can achieve alignment with user preferences."
],
"extractive_spans": [
"Based upon these proximity estimates as well as expected relevance of features for existing applications, system would recommend features for the new application in a ranked order"
],
"free_form_answer": "",
"highlighted_evidence": [
"For a new text analytics application requiring feature engineering, it starts with estimating its semantic proximity (from the perspective of a NLP data scientist) with existing applications with known features. Based upon these proximity estimates as well as expected relevance of features for existing applications, system would recommend features for the new application in a ranked order."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"2c61c2b14722b2a73bd62133d1baa03e40ece116",
"f4951f983e790dc613c4326c39fa5482608fe762"
],
"answer": [
{
"evidence": [
"Table TABREF21 below depicts classes of features selected by authors of these works (as described in the corresponding references above) to highlight the point that despite domain differences, these applications share similar sets of features. Since authors of these works did not cite each other, it is possible that that these features might have been identified independently. This, in turn, supports the hypothesis that if adequate details of any one or two of these applications are fed to a system described in this work, which is designed to estimate semantic similarities across applications, system can automatically suggest potential features for consideration for the remaining applications to start with without requiring manual knowledge of the semantically related applications.",
"FLOAT SELECTED: Table 4: Illustration of similarity of features across related applications in different domains"
],
"extractive_spans": [],
"free_form_answer": "Applications share similar sets of features (of the 7 set of features, 6 selected are the same)",
"highlighted_evidence": [
"Table TABREF21 below depicts classes of features selected by authors of these works (as described in the corresponding references above) to highlight the point that despite domain differences, these applications share similar sets of features.",
"FLOAT SELECTED: Table 4: Illustration of similarity of features across related applications in different domains",
"applications share similar sets of features"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF21 below depicts classes of features selected by authors of these works (as described in the corresponding references above) to highlight the point that despite domain differences, these applications share similar sets of features. Since authors of these works did not cite each other, it is possible that that these features might have been identified independently. This, in turn, supports the hypothesis that if adequate details of any one or two of these applications are fed to a system described in this work, which is designed to estimate semantic similarities across applications, system can automatically suggest potential features for consideration for the remaining applications to start with without requiring manual knowledge of the semantically related applications.",
"FLOAT SELECTED: Table 4: Illustration of similarity of features across related applications in different domains"
],
"extractive_spans": [],
"free_form_answer": "Examples of common features are: N-gram, POS, Context based Features, Morphological Features, Orthographic, Dependency and Lexical",
"highlighted_evidence": [
"Table TABREF21 below depicts classes of features selected by authors of these works (as described in the corresponding references above) to highlight the point that despite domain differences, these applications share similar sets of features.",
"FLOAT SELECTED: Table 4: Illustration of similarity of features across related applications in different domains"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0db83660b3a54171fff79c08eb85988cd88df238",
"a5780f77f30955556badffcb3ccbfb6a12b0ffc1"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 4: Association between Different Feature Types and Units of Analysis"
],
"extractive_spans": [],
"free_form_answer": "Linguistic, Semantic, and Statistical.",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 4: Association between Different Feature Types and Units of Analysis"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"NLP Feature Specification Language ::: Feature Types ::: Linguistic Features",
"Figure FIGREF8 depicts two levels of taxonomy for features considered as linguistic.",
"NLP Feature Specification Language ::: Feature Types ::: Semantic Similarity and Relatedness based Features",
"Semantic similarity can be estimated between words, between phrases, between sentences, and between documents in a corpus. Estimation could either be based upon corpus text alone by applying approaches like vector space modeling BIBREF16, latent semantic analysis BIBREF17, topic modeling BIBREF18, or neural embeddings (e.g., Word2Vec BIBREF19 or Glove BIBREF20) and their extensions to phrase, sentence, and document levels. Otherwise it can be estimated based upon ontological relationships (e.g., WordNet based BIBREF21) among concept terms appearing in the corpus.",
"NLP Feature Specification Language ::: Feature Types ::: Statistical Features",
"Figure FIGREF13 depicts different types of statistical features which can be extracted for individual documents or corpus of documents together with methods to extract these features at different levels."
],
"extractive_spans": [
"Linguistic Features",
"Semantic Similarity and Relatedness based Features",
"Statistical Features"
],
"free_form_answer": "",
"highlighted_evidence": [
"Feature Types ::: Linguistic Features\nFigure FIGREF8 depicts two levels of taxonomy for features considered as linguistic.",
"Feature Types ::: Semantic Similarity and Relatedness based Features\nSemantic similarity can be estimated between words, between phrases, between sentences, and between documents in a corpus.",
"Feature Types ::: Statistical Features\nFigure FIGREF13 depicts different types of statistical features which can be extracted for individual documents or corpus of documents together with methods to extract these features at different levels."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"222e1a85bc7a790aa09daf64a901ce9f1d59e196",
"623c2e76ff6fb4ea369b9294dc1c2083d1db16e5"
],
"answer": [
{
"evidence": [
"Analysis Unit (AU) specifies level at which features have to be extracted. At Corpus level, features are extracted for all the text documents together. At Document level, features are extracted for each document in corpus separately. At Para (paragraph) level Features are extracted for multiple sentences constituting paragraphs together. At Sentence level features to be extracted for each sentence. Figure FIGREF6 depicts classes of features considered in nlpFSpL and their association with different AUs.",
"Syntactic Unit (SU) specifies unit of linguistic features. It could be a `Word' or a `Phrase', or a `N-gram' or a sequence of words matching specific lexico-syntactic pattern captured as `POS tag pattern' (e.g., Hearst pattern BIBREF15) or a sequence of words matching specific regular expression `Regex' or a combination of these. Option Regex is used for special types of terms, e.g., Dates, Numbers, etc. LOGICAL is a Boolean logical operator including AND, OR and NOT (in conjunction with other operator). For example, Phrase AND POS Regex would specify inclusion of a `Phrase' as SU when its constituents also satisfy 'regex' of `POS tags'. Similarly, POS Regex OR NOT(Regex) specifies inclusion of sequence of words as SU if it satisfies `POS tag Pattern' but does not match pattern specified by character `Regex'. Note that SU can be a feature in itself for document and corpus level analysis.",
"Normalize Morphosyntactic Variants: If YES, variants of words including stems, lemmas, and fuzzy matches will be identified before analyzing input text for feature exaction and would be treated equivalent."
],
"extractive_spans": [
"Analysis Unit (AU)",
"Syntactic Unit (SU)",
"LOGICAL",
"Normalize Morphosyntactic Variants"
],
"free_form_answer": "",
"highlighted_evidence": [
"Analysis Unit (AU) specifies level at which features have to be extracted. At Corpus level, features are extracted for all the text documents together.",
"Syntactic Unit (SU) specifies unit of linguistic features.",
"LOGICAL is a Boolean logical operator including AND, OR and NOT (in conjunction with other operator).",
"Normalize Morphosyntactic Variants: If YES, variants of words including stems, lemmas, and fuzzy matches will be identified before analyzing input text for feature exaction and would be treated equivalent."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Figure FIGREF4 specifies the meta elements of the nlpFSpL which are used by the FExSys while interpreting other features.",
"Analysis Unit (AU) specifies level at which features have to be extracted. At Corpus level, features are extracted for all the text documents together. At Document level, features are extracted for each document in corpus separately. At Para (paragraph) level Features are extracted for multiple sentences constituting paragraphs together. At Sentence level features to be extracted for each sentence. Figure FIGREF6 depicts classes of features considered in nlpFSpL and their association with different AUs.",
"Syntactic Unit (SU) specifies unit of linguistic features. It could be a `Word' or a `Phrase', or a `N-gram' or a sequence of words matching specific lexico-syntactic pattern captured as `POS tag pattern' (e.g., Hearst pattern BIBREF15) or a sequence of words matching specific regular expression `Regex' or a combination of these. Option Regex is used for special types of terms, e.g., Dates, Numbers, etc. LOGICAL is a Boolean logical operator including AND, OR and NOT (in conjunction with other operator). For example, Phrase AND POS Regex would specify inclusion of a `Phrase' as SU when its constituents also satisfy 'regex' of `POS tags'. Similarly, POS Regex OR NOT(Regex) specifies inclusion of sequence of words as SU if it satisfies `POS tag Pattern' but does not match pattern specified by character `Regex'. Note that SU can be a feature in itself for document and corpus level analysis.",
"Normalize Morphosyntactic Variants: If YES, variants of words including stems, lemmas, and fuzzy matches will be identified before analyzing input text for feature exaction and would be treated equivalent."
],
"extractive_spans": [],
"free_form_answer": "Analysis Unit (AU) (Corpus level, Document level, Para (paragraph) level, Sentence level);\nSyntactic Unit (SU) (Word, Phrase, N-gram, Regex, POS Regex,);\nLOGICAL (AND, OR, AND NOT, OR NOT);\nNormalize Morphosyntactic Variants (yes or no).",
"highlighted_evidence": [
"Figure FIGREF4 specifies the meta elements of the nlpFSpL which are used by the FExSys while interpreting other features.\n\nAnalysis Unit (AU) specifies level at which features have to be extracted. At Corpus level, features are extracted for all the text documents together. At Document level, features are extracted for each document in corpus separately. At Para (paragraph) level Features are extracted for multiple sentences constituting paragraphs together. At Sentence level features to be extracted for each sentence. Figure FIGREF6 depicts classes of features considered in nlpFSpL and their association with different AUs.\n\nSyntactic Unit (SU) specifies unit of linguistic features. It could be a `Word' or a `Phrase', or a `N-gram' or a sequence of words matching specific lexico-syntactic pattern captured as `POS tag pattern' (e.g., Hearst pattern BIBREF15) or a sequence of words matching specific regular expression `Regex' or a combination of these. Option Regex is used for special types of terms, e.g., Dates, Numbers, etc. LOGICAL is a Boolean logical operator including AND, OR and NOT (in conjunction with other operator). For example, Phrase AND POS Regex would specify inclusion of a `Phrase' as SU when its constituents also satisfy 'regex' of `POS tags'. Similarly, POS Regex OR NOT(Regex) specifies inclusion of sequence of words as SU if it satisfies `POS tag Pattern' but does not match pattern specified by character `Regex'. Note that SU can be a feature in itself for document and corpus level analysis.\n\nNormalize Morphosyntactic Variants: If YES, variants of words including stems, lemmas, and fuzzy matches will be identified before analyzing input text for feature exaction and would be treated equivalent."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"somewhat"
],
"question": [
"How this system recommend features for the new application?",
"What is the similarity of manually selected features across related applications in different domains?",
"What type of features are extracted with this language?",
"What are meta elements of language for specifying NLP features?"
],
"question_id": [
"9be9354eeb2bb1827eeb1e23a20cfdca59fb349a",
"5d5c25d68988fa5effe546507c66997785070573",
"ca595151735444b5b30a003ee7f3a7eb36917208",
"a2edd0454026811223b8f31512bdae91159677be"
],
"question_writer": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Solution Life Cycle of a (traditional) ML based TA Application",
"Figure 2: Refined Solution Life Cycle of a ML based TA Applications",
"Figure 4: Association between Different Feature Types and Units of Analysis",
"Figure 5: Two level taxonomy of linguistic features",
"Figure 6: Context based Features",
"Table 1 shows specification of features in nlpFSpL.",
"Figure 8: Statistical Features together with methods to extract these features from text at different levels",
"Table 1: nlpFSpL Specification for Example 1",
"Table 2: Part I: Output Feature Matrix from FExSys based upon nlpFSpL Specification in Table 1",
"Table 3: Part II: Output Feature Matrix from FExSys based upon nlpFSpL Specification in Table 1 (Note: Col 1 is repeated from Table 2 for reading convenience)",
"Table 4: Illustration of similarity of features across related applications in different domains",
"Figure 9: High level process view for enabling automated transfer of features across semantically related applications"
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-Figure4-1.png",
"4-Figure5-1.png",
"4-Figure6-1.png",
"5-Table1-1.png",
"5-Figure8-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"8-Figure9-1.png"
]
} | [
"What is the similarity of manually selected features across related applications in different domains?",
"What type of features are extracted with this language?",
"What are meta elements of language for specifying NLP features?"
] | [
[
"2002.03056-NLP Feature Reuse across TA Applications ::: Illustrative Example-5",
"2002.03056-7-Table4-1.png"
],
[
"2002.03056-NLP Feature Specification Language ::: Feature Types ::: Statistical Features-0",
"2002.03056-NLP Feature Specification Language ::: Feature Types ::: Semantic Similarity and Relatedness based Features-0",
"2002.03056-3-Figure4-1.png",
"2002.03056-NLP Feature Specification Language ::: Feature Types ::: Linguistic Features-0"
],
[
"2002.03056-NLP Feature Specification Language ::: Meta Elements-3",
"2002.03056-NLP Feature Specification Language ::: Meta Elements-2",
"2002.03056-NLP Feature Specification Language ::: Meta Elements-0",
"2002.03056-NLP Feature Specification Language ::: Meta Elements-1"
]
] | [
"Examples of common features are: N-gram, POS, Context based Features, Morphological Features, Orthographic, Dependency and Lexical",
"Linguistic, Semantic, and Statistical.",
"Analysis Unit (AU) (Corpus level, Document level, Para (paragraph) level, Sentence level);\nSyntactic Unit (SU) (Word, Phrase, N-gram, Regex, POS Regex,);\nLOGICAL (AND, OR, AND NOT, OR NOT);\nNormalize Morphosyntactic Variants (yes or no)."
] | 304 |
1904.02306 | A Simple Joint Model for Improved Contextual Neural Lemmatization | English verbs have multiple forms. For instance, talk may also appear as talks, talked or talking, depending on the context. The NLP task of lemmatization seeks to map these diverse forms back to a canonical one, known as the lemma. We present a simple joint neural model for lemmatization and morphological tagging that achieves state-of-the-art results on 20 languages from the Universal Dependencies corpora. Our paper describes the model in addition to training and decoding procedures. Error analysis indicates that joint morphological tagging and lemmatization is especially helpful in low-resource lemmatization and languages that display a larger degree of morphological complexity. Code and pre-trained models are available at https://sigmorphon.github.io/sharedtasks/2019/task2/. | {
"paragraphs": [
[
"* Equal contribution. Listing order is random.",
"Lemmatization is a core NLP task that involves a string-to-string transduction from an inflected word form to its citation form, known as the lemma. More concretely, consider the English sentence: The bulls are running in Pamplona. A lemmatizer will seek to map each word to a form you may find in a dictionary—for instance, mapping running to run. This linguistic normalization is important in several downstream NLP applications, especially for highly inflected languages. Lemmatization has previously been shown to improve recall for information retrieval BIBREF0 , BIBREF1 , to aid machine translation BIBREF2 , BIBREF3 and is a core part of modern parsing systems BIBREF4 , BIBREF5 .",
"However, the task is quite nuanced as the proper choice of the lemma is context dependent. For instance, in the sentence A running of the bulls took place in Pamplona, the word running is its own lemma, since, here, running is a noun rather than an inflected verb. Several counter-examples exist to this trend, as discussed in depth in haspelmath2013understanding. Thus, a good lemmatizer must make use of some representation of each word's sentential context. The research question in this work is, then, how do we design a lemmatization model that best extracts the morpho-syntax from the sentential context?",
"Recent work BIBREF7 has presented a system that directly summarizes the sentential context using a recurrent neural network to decide how to lemmatize. As N18-1126's system currently achieves state-of-the-art results, it must implicitly learn a contextual representation that encodes the necessary morpho-syntax, as such knowledge is requisite for the task. We contend, however, that rather than expecting the network to implicitly learn some notion of morpho-syntax, it is better to explicitly train a joint model to morphologically disambiguate and lemmatize. Indeed, to this end, we introduce a joint model for the introduction of morphology into a neural lemmatizer. A key feature of our model is its simplicity: Our contribution is to show how to stitch existing models together into a joint model, explaining how to train and decode the model. However, despite the model's simplicity, it still achieves a significant improvement over the state of the art on our target task: lemmatization.",
"Experimentally, our contributions are threefold. First, we show that our joint model achieves state-of-the-art results, outperforming (on average) all competing approaches on a 20-language subset of the Universal Dependencies (UD) corpora BIBREF8 . Second, by providing the joint model with gold morphological tags, we demonstrate that we are far from achieving the upper bound on performance—improvements on morphological tagging could lead to substantially better lemmatization. Finally, we provide a detailed error analysis indicating when and why morphological analysis helps lemmatization. We offer two tangible recommendations: one is better off using a joint model (i) for languages with fewer training data available and (ii) languages that have richer morphology.",
"Our system and pre-trained models on all languages in the latest version of the UD corpora are released at https://sigmorphon.github.io/sharedtasks/2019/task2/."
],
[
"Most languages BIBREF11 in the world exhibit a linguistic phenomenon known as inflectional morphology, which causes word forms to mutate according to the syntactic category of the word. The syntactic context in which the word form occurs determines which form is properly used. One privileged form in the set of inflections is called the lemma. We regard the lemma as a lexicographic convention, often used to better organize dictionaries. Thus, the choice of which inflected form is the lemma is motivated by tradition and convenience, e.g., the lemma is the infinitive for verbs in some Indo-European languages, Not in Latin. rather than by linguistic or cognitive concerns. Note that the stem differs from the lemma in that the stem may not be an actual inflection. In the NLP literature, the syntactic category that each inflected form encodes is called the morphological tag. The morphological tag generalizes traditional part-of-speech tags, enriching them with further linguistic knowledge such as tense, mood, and grammatical case. We call the individual key–attribute pairs morphological attributes.",
"An example of a sentence annotated with morphological tags and lemmata in context is given in fig:sentence. The task of mapping a sentence to a sequence of morphological tags is known as morphological tagging."
],
[
"The primary contribution of this paper is a joint model of morphological tagging and lemmatization. The intuition behind the joint model is simple: high-accuracy lemmatization requires a representation of the sentential context, in which the word occurs (this behind has been evinced in sec:introduction)—a morphological tag provides the precise summary of the context required to choose the correct lemma. Armed with this, we define our joint model of lemmatization and morphological tagging as: DISPLAYFORM0 ",
" fig:model illustrates the structure of our model in the form of a graphical model. We will discuss the lemmatization factor and the morphological tagging factor following two subsections, separately. We caution the reader that the discussion of these models will be brief: Neither of these particular components is novel with respect to the literature, so the formal details of the two models is best found in the original papers. The point of our paper is to describe a simple manner to combine these existing parts into a state-of-the-art lemmatizer."
],
[
"We employ a simple LSTM-based tagger to recover the morphology of a sentence BIBREF12 , BIBREF13 . We also experimented with the neural conditional random field of P18-1247, but E17-1048 gave slightly better tagging scores on average and is faster to train. Given a sequence of INLINEFORM0 words INLINEFORM1 , we would like to obtain the morphological tags INLINEFORM2 for each word, where INLINEFORM3 . The model first obtains a word representation for each token using a character-level biLSTM BIBREF14 embedder, which is then input to a word-level biLSTM tagger that predicts tags for each word. Given a function cLSTM that returns the last hidden state of a character-based LSTM, first we obtain a word representation INLINEFORM4 for word INLINEFORM5 as, DISPLAYFORM0 ",
"where INLINEFORM0 is the character sequence of the word. This representation INLINEFORM1 is then input to a word-level biLSTM tagger. The word-level biLSTM tagger predicts a tag from INLINEFORM2 . A full description of the model is found in E17-1048[author=ryan,color=violet!40,size=,fancyline,caption=,]For camera ready, add citation to my paper. I removed it for anonymity.. We use standard cross-entropy loss for training this model and decode greedily while predicting the tags during test-time. Note that greedy decoding is optimal in this tagger as there is no interdependence between the tags INLINEFORM3 ."
],
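A minimal PyTorch sketch of the tagger just described may help make the two-level structure concrete: a character-level biLSTM builds a word representation from the last hidden states of its forward and backward passes, and a word-level biLSTM predicts one tag per word. All layer sizes and class names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a char-biLSTM word embedder feeding a word-level biLSTM tagger.
import torch
import torch.nn as nn

class CharWordTagger(nn.Module):
    def __init__(self, n_chars, n_tags, char_dim=64, word_dim=128, hidden=256):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, word_dim // 2,
                                 bidirectional=True, batch_first=True)
        self.word_lstm = nn.LSTM(word_dim, hidden,
                                 bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def embed_word(self, char_ids):
        # char_ids: (1, n_chars_in_word). The word vector concatenates the last
        # hidden states of the forward and backward character LSTMs (cLSTM above).
        _, (h, _) = self.char_lstm(self.char_emb(char_ids))
        return torch.cat([h[0, 0], h[1, 0]], dim=-1)

    def forward(self, sentence_char_ids):
        # sentence_char_ids: list of (1, n_chars) tensors, one per word.
        words = torch.stack([self.embed_word(w) for w in sentence_char_ids])
        hidden, _ = self.word_lstm(words.unsqueeze(0))   # (1, n_words, 2*hidden)
        return self.out(hidden).squeeze(0)               # (n_words, n_tags)

# Training uses standard cross-entropy; decoding is an independent argmax per
# word, which is exact because the predicted tags are conditionally independent.
```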
[
"Neural sequence-to-sequence models BIBREF15 , BIBREF16 have yielded state-of-the-art performance on the task of generating morphological variants—including the lemma—as evinced in several recent shared tasks on the subject BIBREF17 , BIBREF18 , BIBREF19 . Our lemmatization factor in eq:joint is based on such models. Specifically, we make use of a hard-attention mechanism BIBREF20 , BIBREF21 , rather than the original soft-attention mechanism. Our choice of hard attention is motivated by the performance of K18-3008's system at the CoNLL-SIGMORPHON task. We use a nearly identical model, but opt for an exact dynamic-programming-based inference scheme BIBREF22 .",
"We briefly describe the model here. Given an inflected word INLINEFORM0 and a tag INLINEFORM1 , we would like to obtain the lemma INLINEFORM2 , dropping the subscript for simplicity. Moreover, for the remainder of this section the subscripts will index into the character string INLINEFORM3 , that is INLINEFORM4 , where each INLINEFORM5 . A character-level biLSTM encoder embeds INLINEFORM6 to INLINEFORM7 . The decoder LSTM produces INLINEFORM8 , reading the concatenation of the embedding of the previous character INLINEFORM9 and the tag embedding INLINEFORM10 , which is produced by an order-invariant linear function. In contrast to soft attention, hard attention models the alignment distribution explicitly.",
"We denote INLINEFORM0 as the set of all monotonic alignments from INLINEFORM1 to INLINEFORM2 where an alignment aligns each target character INLINEFORM3 to exactly one source character in INLINEFORM4 and for INLINEFORM5 , INLINEFORM6 refers to the event that the INLINEFORM7 character of INLINEFORM8 is aligned to the INLINEFORM9 character of INLINEFORM10 . We factor the probabilistic lemmatizer as, DISPLAYFORM0 ",
"",
"The summation is computed with dynamic programming—specifically, using the forward algorithm for hidden Markov models BIBREF23 . INLINEFORM0 is a two-layer feed-forward network followed by a softmax. The transition INLINEFORM1 is the multiplicative attention function with INLINEFORM2 and INLINEFORM3 as input. To enforce monotonicity, INLINEFORM4 if INLINEFORM5 ."
],
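The marginalisation over monotonic alignments is computed with the forward algorithm; the sketch below isolates that dynamic program, assuming the network has already produced per-step emission and transition log-probabilities (`emit` and `trans` are assumed tensors, and starting the alignment from source position zero is a simplification of the actual model).

```python
# Forward algorithm over monotonic hard alignments (HMM-style), in log space.
import torch

def log_marginal(emit, trans):
    # emit:  (T_out, T_in)        log p(target char j | aligned to source pos i)
    # trans: (T_out, T_in, T_in)  log transition score from i_prev to i at step j
    T_out, T_in = emit.shape
    # Monotonicity: transitions that move backwards get zero probability.
    back = torch.tril(torch.ones(T_in, T_in), diagonal=-1).bool()
    trans = trans.masked_fill(back.unsqueeze(0), float("-inf"))

    alpha = emit[0] + trans[0, 0]        # simplification: start from position 0
    for j in range(1, T_out):
        # alpha[i] = emit[j, i] + logsumexp_{i'} ( alpha[i'] + trans[j, i', i] )
        alpha = emit[j] + torch.logsumexp(alpha.unsqueeze(1) + trans[j], dim=0)
    return torch.logsumexp(alpha, dim=0)  # log p(lemma | word form, tag)
```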
[
"We consider two manners, by which we decode our model. The first is a greedy decoding scheme. The second is a crunching BIBREF24 scheme. We describe each in turn.",
"In the greedy scheme, we select the best morphological tag sequence DISPLAYFORM0 ",
"and then decode each lemmata DISPLAYFORM0 ",
"Note that we slightly abuse notation since the argmax here is approximate: exact decoding of our neural lemmatizer is hard. This sort of scheme is also referred to as pipeline decoding.",
"In the crunching scheme, we first extract a INLINEFORM0 -best list of taggings from the morphological tagger. For an input sentence INLINEFORM1 , call the INLINEFORM2 -best tags for the INLINEFORM3 word INLINEFORM4 . Crunching then says we should decode in the following manner DISPLAYFORM0 ",
" Crunching is a tractable heuristic that approximates true joint decoding and, as such, we expect it to outperform the more naïve greedy approach."
],
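The crunching scheme can be summarised in a few lines: for each word, score every tag in the tagger's k-best list together with the lemma the lemmatizer proposes for it, and keep the best-scoring pair. The sketch below assumes hypothetical `tagger_topk`, `lemmatizer_decode` and `lemmatizer_score` interfaces, not functions from any existing library.

```python
# Hedged sketch of crunching: joint rescoring over the k-best tags per word.
def crunch_decode(words, tagger_topk, lemmatizer_decode, lemmatizer_score, k=5):
    tags, lemmata = [], []
    for i, word in enumerate(words):
        best = None
        for tag, log_p_tag in tagger_topk(words, i, k):     # k-best tags + scores
            lemma = lemmatizer_decode(word, tag)             # approximate argmax
            score = log_p_tag + lemmatizer_score(lemma, word, tag)
            if best is None or score > best[0]:
                best = (score, tag, lemma)
        tags.append(best[1])
        lemmata.append(best[2])
    return tags, lemmata
```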
[
"In our model, a simple application of maximum-likelihood estimation (MLE) is unlikely to work well. The reason is that our model is a discriminative directed graphical model (as seen in fig:model) and, thus, suffers from exposure bias BIBREF25 . The intuition behind the poor performance of MLE is simple: the output of the lemmatizer depends on the output of the morphological tagger; as the lemmatizer has only ever seen correct morphological tags, it has never learned to adjust for the errors that will be made at the time of decoding. To compensate for this, we employ jackknifing BIBREF26 , which is standard practice in many NLP pipelines, such as dependency parsing.",
"Jackknifing for training NLP pipelines is quite similar to the oft-employed cross-validation. We divide our training data into INLINEFORM0 splits. Then, for each split INLINEFORM1 , we train the morphological tagger on the INLINEFORM2 split, and then decode it, using either greedy decoding or crunching, on the remaining INLINEFORM3 splits. This technique helps avoid exposure bias and improves the lemmatization performance, which we will demonstrate empirically in sec:exp. Indeed, the model is quite ineffective without this training regime. Note that we employ jackknifing for both the greedy decoding scheme and the crunching decoding scheme."
],
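A sketch of the jackknifing regime, with `train_tagger` and `predict_tags` as placeholders for the actual models: the tags used to train the lemmatizer on each fold come from a tagger that never saw that fold, so the lemmatizer learns to cope with realistic tagging errors.

```python
# Produce jackknifed (out-of-fold) morphological tags for the lemmatizer.
import random

def jackknife_tags(sentences, train_tagger, predict_tags, n_splits=10, seed=0):
    idx = list(range(len(sentences)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::n_splits] for i in range(n_splits)]
    predicted = [None] * len(sentences)
    for fold in folds:
        held_out = set(fold)
        train = [sentences[i] for i in idx if i not in held_out]
        tagger = train_tagger(train)              # fit on the remaining folds
        for i in fold:                            # decode only the held-out fold
            predicted[i] = predict_tags(tagger, sentences[i])
    return predicted                              # tags used to train the lemmatizer
```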
[
"To enable a fair comparison with N18-1126, we use the Universal Dependencies Treebanks BIBREF8 for all our experiments. Following previous work, we use v2.0 of the treebanks for all languages, except Dutch, for which v2.1 was used due to inconsistencies in v2.0. The standard splits are used for all treebanks."
],
[
"For the morphological tagger, we use the baseline implementation from P18-1247. This implementation uses an input layer and linear layer dimension of 128 and a 2-layer LSTM with a hidden layer dimension of 256. The Adam BIBREF27 optimizer is used for training and a dropout rate BIBREF28 of 0.3 is enforced during training. The tagger was trained for 10 epochs.",
"For the lemmatizer, we use a 2-layer biLSTM encoder and a 1-layer LSTM decoder with 400 hidden units. The dimensions of character and tag embedding are 200 and 40, respectively. We enforce a dropout rate of 0.4 in the embedding and encoder LSTM layers. The lemmatizer is also trained with Adam and the learning rate is 0.001. We halve the learning rate whenever the development log-likelihood increases and we perform early-stopping when the learning rate reaches INLINEFORM0 . We apply gradient clipping with a maximum gradient norm of 5."
],
[
"We compare our approach against recent competing methods that report results on UD datasets.",
"The current state of the art is held by N18-1126, who, as discussed in sec:introduction, provide a direct context-to-lemma approach, avoiding the use of morphological tags. We remark that N18-1126 assume a setting where lemmata are annotated at the token level, but morphological tags are not available; we contend, however, that such a setting is not entirely realistic as almost all corpora annotated with lemmata at the token level include morpho-syntactic annotation, including the vast majority of the UD corpora. Thus, we do not consider it a stretch to assume the annotation of morphological tags to train our joint model.",
"Our next baseline is the UDPipe system of K17-3009. Their system performs lemmatization using an averaged perceptron tagger that predicts a (lemma rule, UPOS) pair. Here, a lemma rule generates a lemma by removing parts of the word prefix/suffix and prepending and appending a new prefix/suffix. A guesser first produces correct lemma rules and the tagger is used to disambiguate from them.",
"The strongest non-neural baseline we consider is the system of D15-1272, who, like us, develop a joint model of morphological tagging lemmatization. In contrast to us, however, their model is globally normalized BIBREF29 . Due to their global normalization, they directly estimate the parameters of their model with MLE without worrying about exposure bias. However, in order to efficiently normalize the model, they heuristically limit the set of possible lemmata through the use of edit trees BIBREF30 , which makes the computation of the partition function tractable.",
"Much like D15-1272, Morfette relies on the concept of edit trees. However, a simple perceptron is used for classification with hand-crafted features. A full description of the model is given in grzegorz2008learning."
],
[
"Experimentally, we aim to show three points. i) Our joint model (eq:joint) of morphological tagging and lemmatization achieves state-of-the-art accuracy; this builds on the findings of N18-1126, who show that context significantly helps neural lemmatization. Moreover, the upper bound for contextual lemmatizers that make use of morphological tags is much higher, indicating room for improved lemmatization with better morphological taggers. ii) We discuss a number of error patterns that the model seems to make on the languages, where absolute accuracy is lowest: Latvian, Estonian and Arabic. We suggest possible paths forward to improve performance. iii) We offer an explanation for when our joint model does better than the context-to-lemma baseline. We show through a correlational study that our joint approach with morphological tagging helps the most in two cases: low-resource languages and morphologically rich languages."
],
[
"The first experiment we run focuses on pure performance of the model. Our goal is to determine whether joint morphological tagging and lemmatization improves average performance in a state-of-the-art neural model.",
"For measuring lemmatization performance, we measure the accuracy of guessing the lemmata correctly over an entire corpus. To demonstrate the effectiveness of our model in utilizing context and generalizing to unseen word forms, we follow N18-1126 and also report accuracies on tokens that are i) ambiguous, i.e., more than one lemmata exist for the same inflected form, ii) unseen, i.e., where the inflected form has not been seen in the training set, and iii) seen unambiguous, i.e., where the inflected form has only one lemma and is seen in the training set.",
"The results showing comparisons with all other methods are summarized in fig:results. Each bar represents the average accuracy across 20 languages. Our method achieves an average accuracy of INLINEFORM0 and the strongest baseline, N18-1126, achieves an average accuracy of INLINEFORM1 . The difference in performance ( INLINEFORM2 ) is statistically significant with INLINEFORM3 under a paired permutation test. We outperform the strongest baseline in 11 out of 20 languages and underperform in only 3 languages with INLINEFORM4 . The difference between our method and all other baselines is statistical significant with INLINEFORM5 in all cases. We highlight two additional features of the data. First, decoding using gold morphological tags gives an accuracy of INLINEFORM6 for a difference in performance of INLINEFORM7 . We take the large difference between the upper bound and the current performance of our model to indicate that improved morphological tagging is likely to significantly help lemmatization. Second, it is noteworthy that training with gold tags, but decoding with predicted tags, yields performance that is significantly worse than every baseline except for UDPipe. This speaks for the importance of jackknifing in the training of joint morphological tagger-lemmatizers that are directed and, therefore, suffer from exposure bias.",
"In fig:crunching, we observed crunching further improves performance of the greedy decoding scheme. In 8 out of 20 languages, the improvement is statistical significant with INLINEFORM0 . We select the best INLINEFORM1 for each language based on the development set.",
"In fig:error-analysis, we provide a language-wise breakdown of the performance of our model and the model of N18-1126. Our strongest improvements are seen in Latvian, Greek and Hungarian. When measuring performance solely over unseen inflected forms, we achieve even stronger gains over the baseline method in most languages. This demonstrates the generalization power of our model beyond word forms seen in the training set. In addition, our accuracies on ambiguous tokens are also seen to be higher than the baseline on average, with strong improvements on highly inflected languages such as Latvian and Russian. Finally, on seen unambiguous tokens, we note improvements that are similar across all languages."
],
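The significance claims above rely on a paired permutation test; a simple sign-flip variant over paired scores looks roughly as follows. The number of permutations and the exact pairing unit are assumptions, not the paper's exact protocol.

```python
# Paired permutation (sign-flip) test over per-unit accuracy differences.
import numpy as np

def paired_permutation_test(scores_a, scores_b, n_perm=10000, seed=0):
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    observed = abs(diffs.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diffs.size))
    permuted = np.abs((signs * diffs).mean(axis=1))
    return (permuted >= observed).mean()          # two-sided p-value

# e.g. p = paired_permutation_test(ours_per_language, baseline_per_language)
```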
[
"We attempt to identify systematic error patterns of our model in an effort to motivate future work. For this analysis, we compare predictions of our model and the gold lemmata on three languages with the weakest absolute performance: Estonian, Latvian and Arabic. First, we note the differences in the average lengths of gold lemmata in the tokens we guess incorrectly and all the tokens in the corpus. The lemmata we guess incorrectly are on average 1.04 characters longer than the average length of all the lemmata in the corpus. We found that the length of the incorrect lemmata does not correlate strongly with their frequency. Next, we identify the most common set of edit operations in each language that would transform the incorrect hypothesis to the gold lemma. This set of edit operations was found to follow a power-law distribution.",
"For the case of Latvian, we find that the operation {replace: s INLINEFORM0 a} is the most common error made by our model. This operation corresponds to a possible issue in the Latvian treebank, where adjectives were marked with gendered lemmas. This issue has now been resolved in the latest version of the treebank. For Estonian, the operation {insert: m, insert: a} is the most common error. The suffix -ma in Estonian is used to indicate the infinitive form of verbs. Gold lemmata for verbs in Estonian are marked in their infinitive forms whereas our system predicts the stems of these verbs instead. These inflected forms are usually ambiguous and we believe that the model doesn't generalize well to different form-lemma pairs, partly due to fewer training data available for Estonian. This is an example of an error pattern that could be corrected using improved morphological information about the tokens. Finally, in Arabic, we find that the most common error pattern corresponds to a single ambiguous word form, 'an , which can be lemmatized as 'anna (like “that” in English) or 'an (like “to” in English) depending on the usage of the word in context. The word 'anna must be followed by a nominal sentence while 'an is followed by a verb. Hence, models that can incorporate rich contextual information would be able to avoid such errors."
],
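The per-error edit operations used in this analysis can be collected with the standard library's difflib, as in the sketch below; the paper does not specify its tooling, so this is only one way to reproduce such counts.

```python
# Collect edit operations that turn an incorrect hypothesis into the gold lemma.
from collections import Counter
from difflib import SequenceMatcher

def edit_ops(hyp, gold):
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, hyp, gold).get_opcodes():
        if tag == "replace":
            ops.append(("replace", hyp[i1:i2], gold[j1:j2]))
        elif tag == "insert":
            ops.append(("insert", gold[j1:j2]))
        elif tag == "delete":
            ops.append(("delete", hyp[i1:i2]))
    return tuple(ops)

# Most frequent operation sets over all (hypothesis, gold) error pairs:
# Counter(edit_ops(h, g) for h, g in errors).most_common(10)
```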
[
"Simply presenting improved results does not entirely satiate our curiosity: we would also like to understand why our model performs better. Specifically, we have assumed an additional level of supervision—namely, the annotation of morphological tags. We provide the differences between our method and our retraining of the Lematus system presented in tab:diffs. In addition to the performance of the systems, we also list the number of tokens in each treebank and the number of distinct morphological tags per language. We perform a correlational study, which is shown in tab:correlations.",
"We see that there is a moderate positive correlation ( INLINEFORM0 ) between the number of morphological tags in a language and the improvement our model obtains. As we take the number of tags as a proxy for the morphological complexity in the language, we view this as an indication that attempting to directly extract the relevant morpho-syntactic information from the corpus is not as effective when there is more to learn. In such languages, we recommend exploiting the additional annotation to achieve better results.",
"The second correlation we find is a stronger negative correlation ( INLINEFORM0 ) between the number of tokens available for training in the treebank and the gains in performance of our model over the baseline. This is further demonstrated by the learning curve plot in fig:learning, where we plot the validation accuracy on the Polish treebank for different sizes of the training set. The gap between the performance of our model and Lematus-ch20 is larger when fewer training data are available, especially for ambiguous tokens. This indicates that the incorporation of morphological tags into a model helps more in the low-resource setting. Indeed, this conclusion makes sense—neural networks are good at extracting features from text when there is a sufficiently large amount of data. However, in the low-resource case, we would expect direct supervision on the sort of features we desire to extract to work better. Thus, our second recommendation is to model tags jointly with lemmata when fewer training tokens are available. As we noted earlier, it is almost always the case that token-level annotation of lemmata comes with token-level annotation of morphological tags. In low-resource scenarios, a data augmentation approach such as the one proposed by BIBREF31 can be helpful and serve complementary to our approach."
],
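The correlational study itself amounts to two Pearson correlations between the per-language accuracy gains and the treebank statistics; a minimal sketch using SciPy (inputs are parallel per-language lists, names illustrative):

```python
# Pearson correlation of accuracy gains vs. treebank size and tagset size.
from scipy.stats import pearsonr

def correlation_study(gains, n_tokens, n_tags):
    r_tok, p_tok = pearsonr(n_tokens, gains)
    r_tag, p_tag = pearsonr(n_tags, gains)
    return {"tokens": (r_tok, p_tok), "tags": (r_tag, p_tag)}
```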
[
"We have presented a simple joint model for morphological tagging and lemmatization and discussed techniques for training and decoding. Empirically, we have shown that our model achieves state-of-the-art results, hinting that explicitly modeling morphological tags is a more effective manner for modeling context. In addition to strong numbers, we tried to explain when and why our model does better. Specifically, we show a significant correlation between our scores and the number of tokens and tags present in a treebank. We take this to indicate that our method improves performance more for low-resource languages as well as morphologically rich languages."
],
[
"We thank Toms Bergmanis for his detailed feedback on the accepted version of the manuscript. Additionally, we would like to thank the three anonymous reviewers for their valuable suggestions. The last author would like to acknowledge support from a Facebook Fellowship."
],
[
"We present the exact numbers on all languages to allow future papers to compare to our results in tab:dev and tab:test."
]
],
"section_name": [
"Introduction",
"Background: Lemmatization",
"A Joint Neural Model",
"Morphological Tagger: p(𝐦∣𝐰)p(\\mathbf {m}\\mid \\mathbf {w})",
"A Lemmatizer: p(ℓ i ∣m i ,w i )p(\\ell _i \\mid m_i, w_i)",
"Decoding",
"Training with Jackknifing",
"Dataset",
"Training Setup and Hyperparameters",
"Baselines (and Related Work)",
"Results and Discussion",
"Main Results",
"Error Patterns",
"Why our model performs better?",
"Conclusion",
"Acknowledgments",
"Additional Results"
]
} | {
"answers": [
{
"annotation_id": [
"0dea6efcea9d44ad3e46da60897a8c46ad5b1e8d",
"c613d968be44d6a2dae45ebdc9a020dd168a104e"
],
"answer": [
{
"evidence": [
"Baselines (and Related Work)",
"We compare our approach against recent competing methods that report results on UD datasets.",
"The current state of the art is held by N18-1126, who, as discussed in sec:introduction, provide a direct context-to-lemma approach, avoiding the use of morphological tags. We remark that N18-1126 assume a setting where lemmata are annotated at the token level, but morphological tags are not available; we contend, however, that such a setting is not entirely realistic as almost all corpora annotated with lemmata at the token level include morpho-syntactic annotation, including the vast majority of the UD corpora. Thus, we do not consider it a stretch to assume the annotation of morphological tags to train our joint model.",
"Our next baseline is the UDPipe system of K17-3009. Their system performs lemmatization using an averaged perceptron tagger that predicts a (lemma rule, UPOS) pair. Here, a lemma rule generates a lemma by removing parts of the word prefix/suffix and prepending and appending a new prefix/suffix. A guesser first produces correct lemma rules and the tagger is used to disambiguate from them.",
"The strongest non-neural baseline we consider is the system of D15-1272, who, like us, develop a joint model of morphological tagging lemmatization. In contrast to us, however, their model is globally normalized BIBREF29 . Due to their global normalization, they directly estimate the parameters of their model with MLE without worrying about exposure bias. However, in order to efficiently normalize the model, they heuristically limit the set of possible lemmata through the use of edit trees BIBREF30 , which makes the computation of the partition function tractable.",
"Much like D15-1272, Morfette relies on the concept of edit trees. However, a simple perceptron is used for classification with hand-crafted features. A full description of the model is given in grzegorz2008learning."
],
"extractive_spans": [
"N18-1126",
"UDPipe",
"D15-1272",
"Morfette"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baselines (and Related Work)\nWe compare our approach against recent competing methods that report results on UD datasets.\n\nThe current state of the art is held by N18-1126, who, as discussed in sec:introduction, provide a direct context-to-lemma approach, avoiding the use of morphological tags. ",
"Our next baseline is the UDPipe system of K17-3009. ",
"The strongest non-neural baseline we consider is the system of D15-1272, who, like us, develop a joint model of morphological tagging lemmatization. ",
"Much like D15-1272, Morfette relies on the concept of edit trees. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Baselines (and Related Work)",
"We compare our approach against recent competing methods that report results on UD datasets.",
"The current state of the art is held by N18-1126, who, as discussed in sec:introduction, provide a direct context-to-lemma approach, avoiding the use of morphological tags. We remark that N18-1126 assume a setting where lemmata are annotated at the token level, but morphological tags are not available; we contend, however, that such a setting is not entirely realistic as almost all corpora annotated with lemmata at the token level include morpho-syntactic annotation, including the vast majority of the UD corpora. Thus, we do not consider it a stretch to assume the annotation of morphological tags to train our joint model.",
"Our next baseline is the UDPipe system of K17-3009. Their system performs lemmatization using an averaged perceptron tagger that predicts a (lemma rule, UPOS) pair. Here, a lemma rule generates a lemma by removing parts of the word prefix/suffix and prepending and appending a new prefix/suffix. A guesser first produces correct lemma rules and the tagger is used to disambiguate from them.",
"The strongest non-neural baseline we consider is the system of D15-1272, who, like us, develop a joint model of morphological tagging lemmatization. In contrast to us, however, their model is globally normalized BIBREF29 . Due to their global normalization, they directly estimate the parameters of their model with MLE without worrying about exposure bias. However, in order to efficiently normalize the model, they heuristically limit the set of possible lemmata through the use of edit trees BIBREF30 , which makes the computation of the partition function tractable.",
"Much like D15-1272, Morfette relies on the concept of edit trees. However, a simple perceptron is used for classification with hand-crafted features. A full description of the model is given in grzegorz2008learning."
],
"extractive_spans": [
"N18-1126",
"UDPipe system of K17-3009",
"D15-1272",
"Morfette"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baselines (and Related Work)\nWe compare our approach against recent competing methods that report results on UD datasets.\n\nThe current state of the art is held by N18-1126, who, as discussed in sec:introduction, provide a direct context-to-lemma approach, avoiding the use of morphological tags. We remark that N18-1126 assume a setting where lemmata are annotated at the token level, but morphological tags are not available; we contend, however, that such a setting is not entirely realistic as almost all corpora annotated with lemmata at the token level include morpho-syntactic annotation, including the vast majority of the UD corpora. Thus, we do not consider it a stretch to assume the annotation of morphological tags to train our joint model.\n\nOur next baseline is the UDPipe system of K17-3009. Their system performs lemmatization using an averaged perceptron tagger that predicts a (lemma rule, UPOS) pair. Here, a lemma rule generates a lemma by removing parts of the word prefix/suffix and prepending and appending a new prefix/suffix. A guesser first produces correct lemma rules and the tagger is used to disambiguate from them.\n\nThe strongest non-neural baseline we consider is the system of D15-1272, who, like us, develop a joint model of morphological tagging lemmatization. In contrast to us, however, their model is globally normalized BIBREF29 . Due to their global normalization, they directly estimate the parameters of their model with MLE without worrying about exposure bias. However, in order to efficiently normalize the model, they heuristically limit the set of possible lemmata through the use of edit trees BIBREF30 , which makes the computation of the partition function tractable.\n\nMuch like D15-1272, Morfette relies on the concept of edit trees. However, a simple perceptron is used for classification with hand-crafted features. A full description of the model is given in grzegorz2008learning."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1fe5d6d5ec22b22a11f1f0782372982c22a0389e",
"b52cb3289cd33364e19c105f28d78bfb1efc2efb"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Development performance breakdown."
],
"extractive_spans": [],
"free_form_answer": "They experiment with: arabic, basque, croatian, dutch, estonian, finnish, german, greek, hindi, hungarian, italian, latvian, polish, portuguese, romanian, russian, slovak, slovenian, turkish and urdu.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Development performance breakdown."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Here we present the number of tokens in each of the UD treebanks we use as well as the number of morphological tags. Note, we take the number of tags as a proxy for the morphological complexity of the language. Finally, we present numbers on validation set from our method with greedy decoding and from the strongest baseline (Lematus) as well as the difference. Correlations between the first two columns and the differences are shown in Table 2."
],
"extractive_spans": [],
"free_form_answer": "Arabic, Basque, Croatian, Dutch, Estonian, Finnish, German, Greek, Hindi, Hungarian, Italian, Latvian, Polish, Portuguese, Romanian, Russian, Slovak, Slovenian, Turkish, Urdu",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Here we present the number of tokens in each of the UD treebanks we use as well as the number of morphological tags. Note, we take the number of tags as a proxy for the morphological complexity of the language. Finally, we present numbers on validation set from our method with greedy decoding and from the strongest baseline (Lematus) as well as the difference. Correlations between the first two columns and the differences are shown in Table 2."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"what previous work do they also look at?",
"what languages did they experiment with?"
],
"question_id": [
"3b4077776f4e828f0d1687d0ce8018c9bce4fdc6",
"d1a88fe6655c742421da93cf88b5c541c09866d6"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 1: Our structured neural model shown as a hybrid (directed-undirected) graphical model (Koller and Friedman, 2009). Notionally, the wi denote inflected word forms, the mi denote morphological tags and the `i denote lemmata.",
"Figure 2: Example of a morphologically tagged (in purple) and lemmatized (in red) sentence in Russian using the annotation scheme provided in the UD dataset. The translation is given below (in blue).",
"Figure 3: We present performance (in accuracy) averaged over the 20 languages from UD we consider. Our method (second from the left) significantly outperforms the strongest baseline (fourth from the left; Bergmanis and Goldwater (2018)). The blue column is a skyline that gives our model gold tags during decoding, showing improved tagging should lead to better lemmatization. The remaining are baselines described in §4.3.",
"Figure 4: Relative improvement on validation set with crunching over greedy decoding for different values of k.",
"Figure 5: Dev accuracy breakdown by type of inflected form on all languages comparing our system with greedy decoding against our run of Lematus-ch20, colored by relative improvement in percentage. In each entry, the bottom score is from Lematus-ch20 and the top one is from our system, and the number in the parenthesis is the number of tokens for the corresponding setting.",
"Figure 6: Learning curve showing the accuracy on the validation set of the Polish treebank as the percentage of training set is increased. Markers indicate statistical significant better system with paired permutation test (p < 0.05). Our model is decoded greedily.",
"Table 1: Here we present the number of tokens in each of the UD treebanks we use as well as the number of morphological tags. Note, we take the number of tags as a proxy for the morphological complexity of the language. Finally, we present numbers on validation set from our method with greedy decoding and from the strongest baseline (Lematus) as well as the difference. Correlations between the first two columns and the differences are shown in Table 2.",
"Table 2: The table shows the correlations between the differences in dev performance between our model with greedy decoding and Lematus and two aspects of the data: number of tokens and number of tags.",
"Table 3: Development performance breakdown.",
"Table 4: Test performance breakdown.",
"Table 5: Morphological Tagging Performance on development set."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"6-Figure3-1.png",
"7-Figure4-1.png",
"8-Figure5-1.png",
"8-Figure6-1.png",
"8-Table1-1.png",
"9-Table2-1.png",
"11-Table3-1.png",
"11-Table4-1.png",
"12-Table5-1.png"
]
} | [
"what languages did they experiment with?"
] | [
[
"1904.02306-8-Table1-1.png",
"1904.02306-11-Table3-1.png"
]
] | [
"Arabic, Basque, Croatian, Dutch, Estonian, Finnish, German, Greek, Hindi, Hungarian, Italian, Latvian, Polish, Portuguese, Romanian, Russian, Slovak, Slovenian, Turkish, Urdu"
] | 305 |
2003.00864 | Pathological speech detection using x-vector embeddings | The potential of speech as a non-invasive biomarker to assess a speaker's health has been repeatedly supported by the results of multiple works, for both physical and psychological conditions. Traditional systems for speech-based disease classification have focused on carefully designed knowledge-based features. However, these features may not represent the disease's full symptomatology, and may even overlook its more subtle manifestations. This has prompted researchers to move in the direction of general speaker representations that inherently model symptoms, such as Gaussian Supervectors, i-vectors and, x-vectors. In this work, we focus on the latter, to assess their applicability as a general feature extraction method to the detection of Parkinson's disease (PD) and obstructive sleep apnea (OSA). We test our approach against knowledge-based features and i-vectors, and report results for two European Portuguese corpora, for OSA and PD, as well as for an additional Spanish corpus for PD. Both x-vector and i-vector models were trained with an out-of-domain European Portuguese corpus. Our results show that x-vectors are able to perform better than knowledge-based features in same-language corpora. Moreover, while x-vectors performed similarly to i-vectors in matched conditions, they significantly outperform them when domain-mismatch occurs. | {
"paragraphs": [
[
"Recent advances in Machine Learning (ML) and, in particular, in Deep Neural Networks (DNN) have allowed the development of highly accurate predictive systems for numerous applications. Among others, health has received significant attention due to the potential of ML-based diagnostic, monitoring and therapeutic systems, which are fast (when compared to traditional diagnostic processes), easily distributed and cheap to implement (many such systems can be executed in mobile devices). Furthermore, these systems can incorporate biometric data to perform non-invasive diagnostics.",
"Among other data types, speech has been proposed as a valuable biomarker for the detection of a myriad of diseases, including: neurological conditions, such as Alzheimer’s BIBREF0, Parkinson’s disease (PD) BIBREF1 and Amyotrophic Lateral Sclerosis BIBREF2; mood disorders, such as depression, anxiety BIBREF3 and bipolar disorder BIBREF4; respiratory diseases, such as obstructive sleep apnea (OSA) BIBREF5. However, temporal and financial constraints, lack of awareness in the medical community, ethical issues and patient-privacy laws make the acquisition of medical data one of the greatest obstacles to the development of health-related speech-based classifiers, particularly for deep learning models. For this reason, most systems rely on knowledge-based (KB) features, carefully designed and selected to model disease symptoms, in combination with simple machine learning models (e.g. Linear classifiers, Support Vector Machines). KB features may not encompass subtler symptoms of the disease, nor be general enough to cover varying levels of severity of the disease. To overcome this limitation, some works have instead focused on speaker representation models, such as Gaussian Supervectors and i-vectors. For instance, Garcia et al. BIBREF1 proposed the use of i-vectors for PD classification and Laaridh et al. BIBREF6 applied the i-vector paradigm to the automatic prediction of several dysarthric speech evaluation metrics like intelligibility, severity, and articulation impairment. The intuition behind the use of these representations is the fact that these algorithms model speaker variability, which should include disease symptoms BIBREF1.",
"Proposed by Snyder et al., x-vectors are discriminative deep neural network-based speaker embeddings, that have outperformed i-vectors in tasks such as speaker and language recognition BIBREF7, BIBREF8, BIBREF9. Even though it may not be evident that discriminative data representations are suitable for disease detection when trained with general datasets (that do not necessarily include diseased patients), recent works have shown otherwise. X-vectors have been successfully applied to paralinguistic tasks such as emotion recognition BIBREF10, age and gender classification BIBREF11, the detection of obstructive sleep apnea BIBREF12 and as a complement to the detection of Alzheimer's Disease BIBREF0. Following this line of research, in this work we study the hypothesis that speaker characteristics embedded in x-vectors extracted from a single network, trained for speaker identification using general data, contain sufficient information to allow the detection of multiple diseases. Moreover, we aim to assess if this information is kept even when language mismatch is present, as has already been shown to be true for speaker recognition BIBREF8. In particular, we use the x-vector model as a feature extractor, to train Support Vector Machines for the detection of two speech-affecting diseases: Parkinson's disease (PD) and obstructive sleep apnea (OSA).",
"PD is the second most common neurodegenerative disorder of mid-to-late life after Alzheimer’s disease BIBREF13, affecting 1% of people over the age of 65. Common symptoms include bradykinesia (slowness or difficulty to perform movements), muscular rigidity, rest tremor, as well as postural and gait impairment. 89% of PD patients develop also speech disorders, typically hypokinetic dysarthria, which translates into symptoms such as reduced loudness, monoloudness, monopitch, hypotonicity, breathy and hoarse voice quality, and imprecise articulation BIBREF14BIBREF15.",
"OSA is a sleep-concerned breathing disorder characterized by a complete stop or decrease of the airflow, despite continued or increased inspiratory efforts BIBREF16. This disorder has a prevalence that ranges from 9% to 38% through different populations BIBREF17, with higher incidence in male and elderly groups. OSA causes mood and personality changes, depression, cognitive impairment, excessive daytime sleepiness, thus reducing the patients' quality of life BIBREF18, BIBREF19. It is also associated with diabetes, hypertension and cardiovascular diseases BIBREF16, BIBREF20. Moreover, undiagnosed sleep apnea can have a serious economic impact, having had an estimated cost of $\\$150$ billion in the U.S, in 2015 BIBREF21. Considering the prevalence and serious nature of the two diseases described above, speech-based technology that tests for their existence has the potential to become a key tool for early detection, monitoring and prevention of these conditions BIBREF22.",
"The remainder of this document is organized as follows. Section SECREF2 presents the background concepts on speaker embeddings, and in particular on x-vectors. Section SECREF3 introduces the experimental setup: the corpora, the tasks, the KB features and the speaker embeddings employed. The results are presented and discussed in section SECREF4. Finally, section SECREF5 summarizes the main conclusions and suggests possible directions for future work."
],
[
"Speaker embeddings are fixed-length representations of a variable length speech signal, which capture relevant information about the speaker. Traditional speaker representations include Gaussian Supervectors BIBREF23 obtained from MAP adapted GMM-UBM BIBREF24 and i-vectors BIBREF25.",
"Until recently, i-vectors have been considered the state-of-the-art method for speaker recognition. An extension of the GMM Supervector, the i-vector approach models the variability present in the Supervector, as a low-rank total variability space. Using factor analysis, it is possible to extract low-dimensional total variability factors, called i-vectors, that provide a powerful and compact representation of speech segments BIBREF23, BIBREF25, BIBREF26. In their work, Hauptman et. al. BIBREF1 have noted that using i-vectors, that model the total variability space and total speaker variability, produces a representation that also includes information about speech disorders. To classify healthy and non-healthy speakers, the authors created a reference i-vector for the healthy population and another for the PD patients. Each speaker was then classified according to the distance between their i-vector to the reference i-vector of each class.",
"As stated in Section SECREF1, x-vectors are deep neural network-based speaker embeddings that were originally proposed by BIBREF8 as an alternative to i-vectors for speaker and language recognition. In contrast with i-vectors, that represent the total speaker and channel variability, x-vectors aim to model characteristics that discriminate between speakers. When compared to i-vectors, x-vectors require shorter temporal segments to achieve good results, and have been shown to be more robust to data variability and domain mismatches BIBREF8.",
"The x-vector system, described in detail in BIBREF7, has three main blocks. The first block is a set of five time-delay layers which operate at frame level, with a small temporal context. These layers work as a 1-dimensional convolution, with a kernel size corresponding to the temporal context. The second block, a statistical pooling layer, aggregates the information across the time dimension and outputs a summary for the entire speech segment. In this work, we implemented the attentive statistical pooling layer, proposed by Okabe et al. BIBREF27. The attention mechanism is used to weigh frames according to their importance when computing segment level statistics. The third and final block is a set of fully connected layers, from which x-vector embeddings can be extracted."
],
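A minimal PyTorch sketch of the attentive statistical pooling block described above: frame-level features are weighted by learned attention scores before the segment-level mean and standard deviation are computed. The attention network's size is an illustrative assumption.

```python
# Attentive statistical pooling over frame-level features (Okabe et al. style).
import torch
import torch.nn as nn

class AttentiveStatsPooling(nn.Module):
    def __init__(self, feat_dim, attn_dim=128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )

    def forward(self, x):                                  # x: (batch, frames, feat_dim)
        w = torch.softmax(self.attention(x), dim=1)        # (batch, frames, 1)
        mean = torch.sum(w * x, dim=1)                     # attention-weighted mean
        var = torch.sum(w * x ** 2, dim=1) - mean ** 2
        std = torch.sqrt(var.clamp(min=1e-8))              # attention-weighted std
        return torch.cat([mean, std], dim=1)               # (batch, 2 * feat_dim)
```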
[
"Four corpora were used in our experiments: three to determine the presence or absence of PD and OSA, which include a European Portuguese PD corpus (PPD), a European Portuguese OSA corpus (POSA) and a Spanish PD corpus (SPD); one task-agnostic European Portuguese corpus to train the i-vector and x-vector extractors. For each of the disease-related datasets, we compared three distinct data representations: knowledge-based features, i-vectors and x-vectors. All disease classifications were performed with an SVM classifier. Further details on the corpora, data representations and classification method follow bellow."
],
[
"This corpus is a subset of the EASR (Elderly Automatic Speech Recognition) corpus BIBREF28. It includes recordings of European Portuguese read sentences. It was used to train the i-vector and the x-vector models, for speaker recognition tasks. This corpus includes speakers with ages ranging from 24 to 91, 91% of which in the age range of 60-80. This dataset was selected with the goal of generating speaker embeddings with strong discriminative power in this age range, as is characteristic of the diseases addressed in this work. The corpus was partitioned as 0.70:0.15:0.15 for training, development and test, respectively."
],
[
"The PPD corpus corresponds to a subset of the FraLusoPark corpus BIBREF29, which contains speech recordings of French and European Portuguese healthy volunteers and PD patients, on and off medication. For our experiments, we selected the utterances corresponding to European Portuguese speakers reading prosodic sentences. Only on-medication recordings of the patients were used."
],
[
"This dataset corresponds to a subset of the New Spanish Parkinson's Disease Corpus, collected at the Universidad de Antioquia, Colombia BIBREF22. For this work, we selected the corpus' subset of read sentences. This corpus was included in our work to test whether x-vector representations trained in one language (European Portuguese) are able to generalize to other languages (Spanish)."
],
[
"This corpus is an extended version of the Portuguese Sleep Disorders (PSD) corpus (a detailed description of which can be found in BIBREF30). It includes three tasks spoken in European Portuguese: reading a phonetically rich text; read sentences recorded during a task for cognitive load assessment; and a spontaneous description of an image.",
"All utterances were split into 4 second-long segments using overlapping windows, with a shift of 2 seconds. Further details about each of these datasets can be found in Table TABREF8."
],
[
"Proposed by Pompili et al. BIBREF13, the KB feature set used for PD classification contains 36 features common to eGeMAPS BIBREF31 alongside with the mean and standard deviation (std.) of 12 Mel frequency cepstral coefficients (MFCCs) + log-energy, and their corresponding first and second derivatives, resulting in a 114-dimensional feature vector."
],
[
"For this task, we use the KB feature set proposed in BIBREF30, consisting of: mean of 12 MFCCs, plus their first and second order derivatives and 48 linear prediction cepstral coefficients; mean and std of the frequency and bandwidth of formant 1, 2, and 3; mean and std of Harmonics-to-noise ratio; mean and std of jitter; mean, std, and percentile 20, 50, and 100 of F0; and mean and std of all frames and of only voiced frames of Spectral Flux.",
"All KB features were extracted using openSMILE BIBREF32."
],
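Since the knowledge-based features are extracted with openSMILE, a hedged example using the openSMILE Python wrapper for the eGeMAPS functionals is shown below; the authors' exact configuration files and the additional MFCC/LPCC statistics are not reproduced, and the use of the Python wrapper (rather than the command-line tool) is an assumption about tooling.

```python
# Extract eGeMAPS functionals for one audio segment with the opensmile package.
import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)
features = smile.process_file("segment.wav")   # one row of functionals per file
```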
[
"Following the configuration of BIBREF1, we provide as inputs to the i-vector system 20-dimensional feature vectors composed of 19 MFCCs + log-energy, extracted using a frame-length of 30ms, with 15ms shift. Each frame was mean-normalized over a sliding window of up to 4 seconds. All non-speech frames were removed using energy-based Voice Activity Detection (VAD). Utterances were modelled with a 512 component full-covariance GMM. i-vectors were defined as 180-dimensional feature vectors. All steps were performed with Kaldi BIBREF33 over the PT-EASR corpus."
],
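The front-end step of removing non-speech frames with energy-based VAD can be illustrated in a few lines of NumPy; the thresholding rule below is a simplification for illustration and not Kaldi's actual implementation.

```python
# Simple energy-based voice activity detection over 30 ms / 15 ms frames.
import numpy as np

def energy_vad(signal, sr, frame_ms=30, shift_ms=15, margin_db=30.0):
    frame, shift = int(sr * frame_ms / 1000), int(sr * shift_ms / 1000)
    n_frames = max(0, 1 + (len(signal) - frame) // shift)
    energies = np.array([
        10 * np.log10(np.sum(signal[i * shift:i * shift + frame] ** 2) + 1e-10)
        for i in range(n_frames)
    ])
    # Keep frames within margin_db of the loudest frame; drop the rest.
    return energies > energies.max() - margin_db
```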
[
"The architecture used for the x-vector network is detailed in Table TABREF15, where F corresponds to the number of input features and T corresponds to the total number of frames in the utterance, S to the number of speakers and Ctx stands for context. X-vectors are extracted from segment layer 6. The inputs to this network consist of 24-dimensional filter-bank energy vectors, extracted with Kaldi BIBREF33 using default values for window size and shift. Similar to what was done for the i-vector extraction, non-speech frames were filtered out using energy-based VAD. The extractor network was trained using the PT-EASR corpus for speaker identification, with: 100 epochs; the cross-entropy loss; a learning rate of 0.001; a learning rate decay of 0.05 with a 30 epoch period; a batch size of 512; and a dropout value of 0.001."
],
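A PyTorch skeleton of the x-vector network summarised in the table above: five time-delay (dilated 1-D convolution) frame-level layers, a statistics pooling step, and segment-level layers, with embeddings read from the first segment layer. Layer widths follow common x-vector recipes and are assumptions where the table is not reproduced here; the plain mean+std pooling can be swapped for the attentive pooling sketched earlier.

```python
# Skeleton of a TDNN x-vector network trained for speaker identification.
import torch
import torch.nn as nn

class XVectorNet(nn.Module):
    def __init__(self, feat_dim=24, n_speakers=1000, emb_dim=512):
        super().__init__()
        self.frame_layers = nn.Sequential(   # five time-delay layers (1-D convs)
            nn.Conv1d(feat_dim, 512, kernel_size=5, dilation=1), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Conv1d(512, 512, kernel_size=3, dilation=2), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Conv1d(512, 512, kernel_size=3, dilation=3), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Conv1d(512, 512, kernel_size=1), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Conv1d(512, 1500, kernel_size=1), nn.ReLU(), nn.BatchNorm1d(1500),
        )
        self.segment6 = nn.Linear(2 * 1500, emb_dim)   # x-vectors are read here
        self.segment7 = nn.Linear(emb_dim, emb_dim)
        self.output = nn.Linear(emb_dim, n_speakers)

    def forward(self, x, return_embedding=False):
        # x: (batch, frames, feat_dim) filter-bank energies
        h = self.frame_layers(x.transpose(1, 2))                   # (batch, 1500, T')
        stats = torch.cat([h.mean(dim=2), h.std(dim=2)], dim=1)    # simple stats pooling
        emb = self.segment6(stats)
        if return_embedding:
            return emb
        return self.output(torch.relu(self.segment7(torch.relu(emb))))
```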
[
"Nine classification tasks (three data representations for each of the three datasets) were performed with SVM classifiers. The hyper-parameters used to train each classifier, detailed in table TABREF17, were selected through grid-search.",
"Considering the limited size of the corpora, fewer than 3h each, we chose to use leave-one-speaker-out cross validation as an alternative to partitioning the corpora into train, development and test sets. This was done to add significance to our results.",
"We perform classification at the segment level and assign speakers a final classification by means of a weighted majority vote, where the predictions obtained for each segment uttered by the speaker were weighted by the corresponding number of speech frames."
],
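The evaluation protocol (segment-level SVM, leave-one-speaker-out cross-validation, and a majority vote per speaker weighted by the number of speech frames) maps directly onto scikit-learn, as sketched below; the SVM hyperparameters are placeholders for the grid-searched values.

```python
# Leave-one-speaker-out SVM classification with duration-weighted speaker votes.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

def loso_speaker_predictions(X, y, speakers, n_frames, svm_params=None):
    X, y = np.asarray(X), np.asarray(y)
    speakers, n_frames = np.asarray(speakers), np.asarray(n_frames)
    speaker_pred = {}
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=speakers):
        clf = SVC(**(svm_params or {"kernel": "rbf", "C": 1.0}))
        clf.fit(X[train_idx], y[train_idx])
        seg_pred = clf.predict(X[test_idx])
        spk = speakers[test_idx][0]
        votes = {}                         # weight each vote by speech frames
        for p, w in zip(seg_pred, n_frames[test_idx]):
            votes[p] = votes.get(p, 0) + w
        speaker_pred[spk] = max(votes, key=votes.get)
    return speaker_pred
```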
[
"This section contains the results obtained for all three tasks: PD detection with the PPD corpus, OSA detection with the PSD corpus and PD detection with the SPD corpus. Results are reported in terms of average Precision, Recall and F1 Score. The values highlighted in Tables TABREF19, TABREF21 and TABREF23 represent the best results, both at the speaker and segment levels."
],
[
"Results for PD classification with the PPD corpus are presented in Table TABREF19. The table shows that speaker representations learnt from out-of-domain data outperform KB features. This supports our hypothesis that speaker discriminative representations not only contain information about speech pathologies, but are also able to model symptoms of the disease that KB features fail to include. It is also possible to notice that x-vectors and i-vectors achieve very similar results, albeit x-vectors present a small improvement at the segment level, whereas i-vectors achieve slightly better results at the speaker level. A possible interpretation is the fact that, while x-vectors provide stronger representations for short segments, some works have shown that i-vectors may perform better when considering longer segments BIBREF8. As such, performing a majority vote weighted by the duration of speech segments may be giving an advantage to the i-vector approach at the speaker level."
],
[
"Table TABREF21 contains the results for OSA detection with the PSD corpus. For this task, x-vectors outperform all other approaches at the segment level, most importantly they significantly outperform KB features by $\\sim $8%, which further supports our hypothesis. Nevertheless, it is important to point out that both approaches perform similarly at the speaker level. Additionally, we can see that i-vectors perform worse than KB features. One possible justification, is the fact that the PSD corpus includes tasks - such as spontaneous speech - that do not match the read sentences included in the corpus used to train the i-vector and x-vector extractors. These tasks may thus be considered out-of-domain, which would explain why x-vectors are able to surpass the i-vector approach."
],
[
"Table TABREF23 presents the results achieved for the classification of SPD corpus. This experiment was designed to assess the suitability of x-vectors trained in one language and being applied to disease classification in a different language. Our results show that KB features outperform both speaker representations. This is most likely caused by the language mismatch between the Spanish PD corpus and the European Portuguese training corpus. Nonetheless, it should be noted that, as in the previous task, x-vectors are able to surpass i-vectors in an out-of-domain corpus."
],
[
"In this work we studied the suitability of task-agnostic speaker representations to replace knowledge-based features in multiple disease detection. Our main focus laid in x-vectors embeddings, trained with elderly speech data.",
"Our experiments with the European Portuguese datasets support the hypothesis that discriminative speaker embeddings contain information relevant for disease detection. In particular, we found evidence that these embeddings contain information that KB features fail to represent, thus proving the validity of our approach. It was also observed that x-vectors are more suitable than i-vectors for tasks whose domain does not match that of the training data, such as verbal task mismatch and cross-lingual experiments. This indicates that x-vectors embeddings are a strong contender in the replacement of knowledge-based feature sets for PD and OSA detection.",
"As future work, we suggest training the x-vector network with augmented data and with multilingual datasets, as well as extending this approach to other diseases and verbal tasks. Furthermore, as x-vectors shown to behave better with out-of-domain data, we also suggest replicating the experiments with in-the-wild data collected from online multimedia repositories (vlogs), and comparing the results to those obtained with data recorded in controlled conditions BIBREF34."
]
],
"section_name": [
"Introduction",
"Background - Speaker Embeddings",
"Experimental Setup",
"Experimental Setup ::: Corpora ::: Speaker Recognition - Portuguese (PT-EASR) corpus",
"Experimental Setup ::: Corpora ::: PD detection - Portuguese PD (PPD) corpus",
"Experimental Setup ::: Corpora ::: PD detection - Spanish PD (SPD) corpus",
"Experimental Setup ::: Corpora ::: OSA detection - PSD corpus",
"Experimental Setup ::: Knowledge-based features ::: Parkinson's disease",
"Experimental Setup ::: Knowledge-based features ::: Obstructive sleep apnea",
"Experimental Setup ::: Speaker representation models ::: i-vectors",
"Experimental Setup ::: Speaker representation models ::: x-vectors",
"Experimental Setup ::: Model training and parameters",
"Results",
"Results ::: Parkinson's disease - Portuguese corpus",
"Results ::: Obstructive sleep apnea",
"Results ::: Parkinson's disease: Spanish PD corpus",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"525c56017b810500e29a8fbce735eb3d2a2ad024",
"b948244d6e729b8b2312c709c51b5ff401ca1beb"
],
"answer": [
{
"evidence": [
"Until recently, i-vectors have been considered the state-of-the-art method for speaker recognition. An extension of the GMM Supervector, the i-vector approach models the variability present in the Supervector, as a low-rank total variability space. Using factor analysis, it is possible to extract low-dimensional total variability factors, called i-vectors, that provide a powerful and compact representation of speech segments BIBREF23, BIBREF25, BIBREF26. In their work, Hauptman et. al. BIBREF1 have noted that using i-vectors, that model the total variability space and total speaker variability, produces a representation that also includes information about speech disorders. To classify healthy and non-healthy speakers, the authors created a reference i-vector for the healthy population and another for the PD patients. Each speaker was then classified according to the distance between their i-vector to the reference i-vector of each class.",
"FLOAT SELECTED: TABLE IV RESULTS FOR THE PORTUGUESE PD CORPUS",
"FLOAT SELECTED: TABLE V RESULTS FOR THE PORTUGUESE OSA CORPUS"
],
"extractive_spans": [],
"free_form_answer": "PD : i-vectors had segment level F1 score 66.6 and for speaker level had 75.6 F1 score\n\nOSA: For the same levels it had F1 scores of 65.5 and 75.0",
"highlighted_evidence": [
"Until recently, i-vectors have been considered the state-of-the-art method for speaker recognition. An extension of the GMM Supervector, the i-vector approach models the variability present in the Supervector, as a low-rank total variability space. Using factor analysis, it is possible to extract low-dimensional total variability factors, called i-vectors, that provide a powerful and compact representation of speech segments BIBREF23, BIBREF25, BIBREF26.",
"FLOAT SELECTED: TABLE IV RESULTS FOR THE PORTUGUESE PD CORPUS",
"FLOAT SELECTED: TABLE V RESULTS FOR THE PORTUGUESE OSA CORPUS"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"This section contains the results obtained for all three tasks: PD detection with the PPD corpus, OSA detection with the PSD corpus and PD detection with the SPD corpus. Results are reported in terms of average Precision, Recall and F1 Score. The values highlighted in Tables TABREF19, TABREF21 and TABREF23 represent the best results, both at the speaker and segment levels.",
"FLOAT SELECTED: TABLE IV RESULTS FOR THE PORTUGUESE PD CORPUS",
"FLOAT SELECTED: TABLE V RESULTS FOR THE PORTUGUESE OSA CORPUS",
"FLOAT SELECTED: TABLE VI RESULTS FOR THE SPANISH PD CORPUS"
],
"extractive_spans": [],
"free_form_answer": "State of the art F1 scores are:\nPPD: Seg 66.7, Spk 75.6\nOSA: Seg 73.3, Spk 81.7\nSPD: Seg 79.0, Spk 87.0",
"highlighted_evidence": [
"This section contains the results obtained for all three tasks: PD detection with the PPD corpus, OSA detection with the PSD corpus and PD detection with the SPD corpus. Results are reported in terms of average Precision, Recall and F1 Score.",
"FLOAT SELECTED: TABLE IV RESULTS FOR THE PORTUGUESE PD CORPUS",
"FLOAT SELECTED: TABLE V RESULTS FOR THE PORTUGUESE OSA CORPUS",
"FLOAT SELECTED: TABLE VI RESULTS FOR THE SPANISH PD CORPUS"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0e6a955ea9ed0ae5230ae5eacf316c872ca7cb7c",
"1f0f92ddd63022ec64117a910f22b181a5125ff0",
"858cfe0b75fa826f2d2bb07dd9e0d784723188fd"
],
"answer": [
{
"evidence": [
"Table TABREF21 contains the results for OSA detection with the PSD corpus. For this task, x-vectors outperform all other approaches at the segment level, most importantly they significantly outperform KB features by $\\sim $8%, which further supports our hypothesis. Nevertheless, it is important to point out that both approaches perform similarly at the speaker level. Additionally, we can see that i-vectors perform worse than KB features. One possible justification, is the fact that the PSD corpus includes tasks - such as spontaneous speech - that do not match the read sentences included in the corpus used to train the i-vector and x-vector extractors. These tasks may thus be considered out-of-domain, which would explain why x-vectors are able to surpass the i-vector approach."
],
"extractive_spans": [],
"free_form_answer": "For OSA detection x-vectors outperform all other approaches at the segment level, most importantly they significantly outperform KB features by 8%.",
"highlighted_evidence": [
"Table TABREF21 contains the results for OSA detection with the PSD corpus. For this task, x-vectors outperform all other approaches at the segment level, most importantly they significantly outperform KB features by $\\sim $8%, which further supports our hypothesis.",
"x-vectors outperform all other approaches at the segment level, most importantly they significantly outperform KB features by"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: TABLE IV RESULTS FOR THE PORTUGUESE PD CORPUS",
"FLOAT SELECTED: TABLE V RESULTS FOR THE PORTUGUESE OSA CORPUS"
],
"extractive_spans": [],
"free_form_answer": "For Portuguese PD corpus, x-vector outperform KB for segment and speaker level for 2.2 and 2.2 F1 respectively.\nFor Portuguese OSA corpus, x-vector outperform KB for segment and speaker level for 8.5 and 0.1 F1 respectively.",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE IV RESULTS FOR THE PORTUGUESE PD CORPUS",
"FLOAT SELECTED: TABLE V RESULTS FOR THE PORTUGUESE OSA CORPUS"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Results for PD classification with the PPD corpus are presented in Table TABREF19. The table shows that speaker representations learnt from out-of-domain data outperform KB features. This supports our hypothesis that speaker discriminative representations not only contain information about speech pathologies, but are also able to model symptoms of the disease that KB features fail to include. It is also possible to notice that x-vectors and i-vectors achieve very similar results, albeit x-vectors present a small improvement at the segment level, whereas i-vectors achieve slightly better results at the speaker level. A possible interpretation is the fact that, while x-vectors provide stronger representations for short segments, some works have shown that i-vectors may perform better when considering longer segments BIBREF8. As such, performing a majority vote weighted by the duration of speech segments may be giving an advantage to the i-vector approach at the speaker level.",
"Table TABREF23 presents the results achieved for the classification of SPD corpus. This experiment was designed to assess the suitability of x-vectors trained in one language and being applied to disease classification in a different language. Our results show that KB features outperform both speaker representations. This is most likely caused by the language mismatch between the Spanish PD corpus and the European Portuguese training corpus. Nonetheless, it should be noted that, as in the previous task, x-vectors are able to surpass i-vectors in an out-of-domain corpus.",
"FLOAT SELECTED: TABLE IV RESULTS FOR THE PORTUGUESE PD CORPUS",
"FLOAT SELECTED: TABLE V RESULTS FOR THE PORTUGUESE OSA CORPUS"
],
"extractive_spans": [],
"free_form_answer": "Portuguese PD Corpus: for segment level i-vectors had better F1 score comparing to KB by 2.1% and for speaker level by 3.5%\nIn case of Spanish PD corpus, KB had higher F1 scores in terms of Segment level and Speaker level by 3.3% and 2.0%. ",
"highlighted_evidence": [
"Results for PD classification with the PPD corpus are presented in Table TABREF19. ",
"Table TABREF23 presents the results achieved for the classification of SPD corpus. ",
"FLOAT SELECTED: TABLE IV RESULTS FOR THE PORTUGUESE PD CORPUS",
"FLOAT SELECTED: TABLE V RESULTS FOR THE PORTUGUESE OSA CORPUS"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"1bd6b4214e4dc5619d31353c03ef125696971531"
],
"answer": [
{
"evidence": [
"Our experiments with the European Portuguese datasets support the hypothesis that discriminative speaker embeddings contain information relevant for disease detection. In particular, we found evidence that these embeddings contain information that KB features fail to represent, thus proving the validity of our approach. It was also observed that x-vectors are more suitable than i-vectors for tasks whose domain does not match that of the training data, such as verbal task mismatch and cross-lingual experiments. This indicates that x-vectors embeddings are a strong contender in the replacement of knowledge-based feature sets for PD and OSA detection."
],
"extractive_spans": [
"tasks whose domain does not match that of the training data"
],
"free_form_answer": "",
"highlighted_evidence": [
"It was also observed that x-vectors are more suitable than i-vectors for tasks whose domain does not match that of the training data, such as verbal task mismatch and cross-lingual experiments."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"20770612127236b7f4e9fa258e18e360298993df",
"5764d382c0f0f27a4fab1cf50ab1ebea1e23353e"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: TABLE I CORPORA DESCRIPTION."
],
"extractive_spans": [],
"free_form_answer": "For Portuguese PD have for patient 1.24h and for control 1.07 h.\nFor Portuguese OSA have for patient 1.10h and for control 1.05 h.\nFor Spanish PD have for patient 0.49h and for control 0.50h.",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE I CORPORA DESCRIPTION."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"This corpus is a subset of the EASR (Elderly Automatic Speech Recognition) corpus BIBREF28. It includes recordings of European Portuguese read sentences. It was used to train the i-vector and the x-vector models, for speaker recognition tasks. This corpus includes speakers with ages ranging from 24 to 91, 91% of which in the age range of 60-80. This dataset was selected with the goal of generating speaker embeddings with strong discriminative power in this age range, as is characteristic of the diseases addressed in this work. The corpus was partitioned as 0.70:0.15:0.15 for training, development and test, respectively.",
"All utterances were split into 4 second-long segments using overlapping windows, with a shift of 2 seconds. Further details about each of these datasets can be found in Table TABREF8.",
"FLOAT SELECTED: TABLE I CORPORA DESCRIPTION."
],
"extractive_spans": [],
"free_form_answer": "15 percent of the corpora is used for testing. OSA contains 60 speakers, 3495 segments and PD 140 speakers and 3365 segments.",
"highlighted_evidence": [
"The corpus was partitioned as 0.70:0.15:0.15 for training, development and test, respectively.",
"Further details about each of these datasets can be found in Table TABREF8.",
"FLOAT SELECTED: TABLE I CORPORA DESCRIPTION."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What are state of the art results on OSA and PD corpora used for testing?",
"How better does x-vectors perform than knowlege-based features in same-language corpora?",
"What is meant by domain missmatch occuring?",
"How big are OSA and PD corporas used for testing?"
],
"question_id": [
"184382af8f58031c6e357dbee32c90ec95288cb3",
"97abc2e7b39869f660986b91fc68be4ba196805c",
"9ec0527bda2c302f4e82949cc0ae7f7769b7bfb8",
"330fe3815f74037a9be93a4c16610c736a2a27b3"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. X-vector network (adapted from [9]).",
"TABLE I CORPORA DESCRIPTION.",
"TABLE II X-vector NETWORK ARCHITECTURE",
"TABLE IV RESULTS FOR THE PORTUGUESE PD CORPUS",
"TABLE III SVM MODEL PARAMETERS",
"TABLE V RESULTS FOR THE PORTUGUESE OSA CORPUS",
"TABLE VI RESULTS FOR THE SPANISH PD CORPUS"
],
"file": [
"2-Figure1-1.png",
"3-TableI-1.png",
"3-TableII-1.png",
"4-TableIV-1.png",
"4-TableIII-1.png",
"4-TableV-1.png",
"4-TableVI-1.png"
]
} | [
"What are state of the art results on OSA and PD corpora used for testing?",
"How better does x-vectors perform than knowlege-based features in same-language corpora?",
"How big are OSA and PD corporas used for testing?"
] | [
[
"2003.00864-4-TableV-1.png",
"2003.00864-4-TableIV-1.png",
"2003.00864-Background - Speaker Embeddings-1",
"2003.00864-Results-0",
"2003.00864-4-TableVI-1.png"
],
[
"2003.00864-Results ::: Parkinson's disease: Spanish PD corpus-0",
"2003.00864-Results ::: Parkinson's disease - Portuguese corpus-0",
"2003.00864-Results ::: Obstructive sleep apnea-0",
"2003.00864-4-TableIV-1.png",
"2003.00864-4-TableV-1.png"
],
[
"2003.00864-Experimental Setup ::: Corpora ::: Speaker Recognition - Portuguese (PT-EASR) corpus-0",
"2003.00864-3-TableI-1.png",
"2003.00864-Experimental Setup ::: Corpora ::: OSA detection - PSD corpus-1"
]
] | [
"State of the art F1 scores are:\nPPD: Seg 66.7, Spk 75.6\nOSA: Seg 73.3, Spk 81.7\nSPD: Seg 79.0, Spk 87.0",
"Portuguese PD Corpus: for segment level i-vectors had better F1 score comparing to KB by 2.1% and for speaker level by 3.5%\nIn case of Spanish PD corpus, KB had higher F1 scores in terms of Segment level and Speaker level by 3.3% and 2.0%. ",
"15 percent of the corpora is used for testing. OSA contains 60 speakers, 3495 segments and PD 140 speakers and 3365 segments."
] | 306 |
1605.04278 | Universal Dependencies for Learner English | We introduce the Treebank of Learner English (TLE), the first publicly available syntactic treebank for English as a Second Language (ESL). The TLE provides manually annotated POS tags and Universal Dependency (UD) trees for 5,124 sentences from the Cambridge First Certificate in English (FCE) corpus. The UD annotations are tied to a pre-existing error annotation of the FCE, whereby full syntactic analyses are provided for both the original and error corrected versions of each sentence. Further on, we delineate ESL annotation guidelines that allow for consistent syntactic treatment of ungrammatical English. Finally, we benchmark POS tagging and dependency parsing performance on the TLE dataset and measure the effect of grammatical errors on parsing accuracy. We envision the treebank to support a wide range of linguistic and computational research on second language acquisition as well as automatic processing of ungrammatical language. The treebank is available at universaldependencies.org. The annotation manual used in this project and a graphical query engine are available at esltreebank.org. | {
"paragraphs": [
[
"The majority of the English text available worldwide is generated by non-native speakers BIBREF0 . Such texts introduce a variety of challenges, most notably grammatical errors, and are of paramount importance for the scientific study of language acquisition as well as for NLP. Despite the ubiquity of non-native English, there is currently no publicly available syntactic treebank for English as a Second Language (ESL).",
"To address this shortcoming, we present the Treebank of Learner English (TLE), a first of its kind resource for non-native English, containing 5,124 sentences manually annotated with POS tags and dependency trees. The TLE sentences are drawn from the FCE dataset BIBREF1 , and authored by English learners from 10 different native language backgrounds. The treebank uses the Universal Dependencies (UD) formalism BIBREF2 , BIBREF3 , which provides a unified annotation framework across different languages and is geared towards multilingual NLP BIBREF4 . This characteristic allows our treebank to support computational analysis of ESL using not only English based but also multilingual approaches which seek to relate ESL phenomena to native language syntax.",
"While the annotation inventory and guidelines are defined by the English UD formalism, we build on previous work in learner language analysis BIBREF5 , BIBREF6 to formulate an additional set of annotation conventions aiming at a uniform treatment of ungrammatical learner language. Our annotation scheme uses a two-layer analysis, whereby a distinct syntactic annotation is provided for the original and the corrected version of each sentence. This approach is enabled by a pre-existing error annotation of the FCE BIBREF7 which is used to generate an error corrected variant of the dataset. Our inter-annotator agreement results provide evidence for the ability of the annotation scheme to support consistent annotation of ungrammatical structures.",
"Finally, a corpus that is annotated with both grammatical errors and syntactic dependencies paves the way for empirical investigation of the relation between grammaticality and syntax. Understanding this relation is vital for improving tagging and parsing performance on learner language BIBREF8 , syntax based grammatical error correction BIBREF9 , BIBREF10 , and many other fundamental challenges in NLP. In this work, we take the first step in this direction by benchmarking tagging and parsing accuracy on our dataset under different training regimes, and obtaining several estimates for the impact of grammatical errors on these tasks.",
"To summarize, this paper presents three contributions. First, we introduce the first large scale syntactic treebank for ESL, manually annotated with POS tags and universal dependencies. Second, we describe a linguistically motivated annotation scheme for ungrammatical learner English and provide empirical support for its consistency via inter-annotator agreement analysis. Third, we benchmark a state of the art parser on our dataset and estimate the influence of grammatical errors on the accuracy of automatic POS tagging and dependency parsing.",
"The remainder of this paper is structured as follows. We start by presenting an overview of the treebank in section SECREF2 . In sections SECREF3 and SECREF4 we provide background information on the annotation project, and review the main annotation stages leading to the current form of the dataset. The ESL annotation guidelines are summarized in section SECREF5 . Inter-annotator agreement analysis is presented in section SECREF6 , followed by parsing experiments in section SECREF7 . Finally, we review related work in section SECREF8 and present the conclusion in section SECREF9 ."
],
[
"The TLE currently contains 5,124 sentences (97,681 tokens) with POS tag and dependency annotations in the English Universal Dependencies (UD) formalism BIBREF2 , BIBREF3 . The sentences were obtained from the FCE corpus BIBREF1 , a collection of upper intermediate English learner essays, containing error annotations with 75 error categories BIBREF7 . Sentence level segmentation was performed using an adaptation of the NLTK sentence tokenizer. Under-segmented sentences were split further manually. Word level tokenization was generated using the Stanford PTB word tokenizer.",
"The treebank represents learners with 10 different native language backgrounds: Chinese, French, German, Italian, Japanese, Korean, Portuguese, Spanish, Russian and Turkish. For every native language, we randomly sampled 500 automatically segmented sentences, under the constraint that selected sentences have to contain at least one grammatical error that is not punctuation or spelling.",
"The TLE annotations are provided in two versions. The first version is the original sentence authored by the learner, containing grammatical errors. The second, corrected sentence version, is a grammatical variant of the original sentence, generated by correcting all the grammatical errors in the sentence according to the manual error annotation provided in the FCE dataset. The resulting corrected sentences constitute a parallel corpus of standard English. Table TABREF4 presents basic statistics of both versions of the annotated sentences.",
"To avoid potential annotation biases, the annotations of the treebank were created manually from scratch, without utilizing any automatic annotation tools. To further assure annotation quality, each annotated sentence was reviewed by two additional annotators. To the best of our knowledge, TLE is the first large scale English treebank constructed in a completely manual fashion."
],
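The overview above mentions sentence segmentation with an adapted NLTK tokenizer and word tokenization with the Stanford PTB tokenizer. The sketch below shows off-the-shelf approximations in Python; NLTK's TreebankWordTokenizer stands in for the Stanford tool, and the toy input reuses a learner sentence quoted later in the paper.

```python
# Off-the-shelf approximation of the preprocessing described above: NLTK
# sentence segmentation plus PTB-style word tokenization. The paper used an
# adapted NLTK segmenter and the Stanford tokenizer; this is only a stand-in.
import nltk
from nltk.tokenize import TreebankWordTokenizer

nltk.download("punkt", quiet=True)
word_tok = TreebankWordTokenizer()

essay = "That time I had to sleep in tent. It was a very funny trip."  # toy text
for sentence in nltk.sent_tokenize(essay):
    print(word_tok.tokenize(sentence))
```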
[
"The treebank was annotated by six students, five undergraduates and one graduate. Among the undergraduates, three are linguistics majors and two are engineering majors with a linguistic minor. The graduate student is a linguist specializing in syntax. An additional graduate student in NLP participated in the final debugging of the dataset.",
"Prior to annotating the treebank sentences, the annotators were trained for about 8 weeks. During the training, the annotators attended tutorials on dependency grammars, and learned the English UD guidelines, the Penn Treebank POS guidelines BIBREF11 , the grammatical error annotation scheme of the FCE BIBREF7 , as well as the ESL guidelines described in section SECREF5 and in the annotation manual.",
"Furthermore, the annotators completed six annotation exercises, in which they were required to annotate POS tags and dependencies for practice sentences from scratch. The exercises were done individually, and were followed by group meetings in which annotation disagreements were discussed and resolved. Each of the first three exercises consisted of 20 sentences from the UD gold standard for English, the English Web Treebank (EWT) BIBREF12 . The remaining three exercises contained 20-30 ESL sentences from the FCE. Many of the ESL guidelines were introduced or refined based on the disagreements in the ESL practice exercises and the subsequent group discussions. Several additional guidelines were introduced in the course of the annotation process.",
"During the training period, the annotators also learned to use a search tool that enables formulating queries over word and POS tag sequences as regular expressions and obtaining their annotation statistics in the EWT. After experimenting with both textual and graphical interfaces for performing the annotations, we converged on a simple text based format described in section SECREF6 , where the annotations were filled in using a spreadsheet or a text editor, and tested with a script for detecting annotation typos. The annotators continued to meet and discuss annotation issues on a weekly basis throughout the entire duration of the project."
],
[
"The formation of the treebank was carried out in four steps: annotation, review, disagreement resolution and targeted debugging."
],
[
"In the first stage, the annotators were given sentences for annotation from scratch. We use a CoNLL based textual template in which each word is annotated in a separate line. Each line contains 6 columns, the first of which has the word index (IND) and the second the word itself (WORD). The remaining four columns had to be filled in with a Universal POS tag (UPOS), a Penn Treebank POS tag (POS), a head word index (HIND) and a dependency relation (REL) according to version 1 of the English UD guidelines.",
"The annotation section of the sentence is preceded by a metadata header. The first field in this header, denoted with SENT, contains the FCE error coded version of the sentence. The annotators were instructed to verify the error annotation, and add new error annotations if needed. Corrections to the sentence segmentation are specified in the SEGMENT field. Further down, the field TYPO is designated for literal annotation of spelling errors and ill formed words that happen to form valid words (see section SECREF13 ).",
"The example below presents a pre-annotated original sentence given to an annotator.",
"#SENT=That time I had to sleep in <ns type= \"MD\"><c>a</c></ns> tent.",
"#SEGMENT=",
"#TYPO= *1.1cm*1.1cm*1.1cm*1.1cm#IND WORD UPOS POS HIND REL",
"1 That",
"2 time",
"3 I",
"4 had",
"5 to",
"6 sleep",
"7 in",
"8 tent",
"9 . ",
"Upon completion of the original sentence, the annotators proceeded to annotate the corrected sentence version. To reduce annotation time, annotators used a script that copies over annotations from the original sentence and updates head indices of tokens that appear in both sentence versions. Head indices and relation labels were filled in only if the head word of the token appeared in both the original and corrected sentence versions. Tokens with automatically filled annotations included an additional # sign in a seventh column of each word's annotation. The # signs had to be removed, and the corresponding annotations either approved or changed as appropriate. Tokens that did not appear in the original sentence version were annotated from scratch."
],
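Because the annotation files follow the simple text template described above (metadata lines starting with '#', then one token per line with up to six whitespace-separated columns), a reader is easy to sketch. The function below is illustrative: the handling of the optional seventh '#' auto-fill marker and the toy input are assumptions, not the project's actual tooling.

```python
# Illustrative reader for the text template described above: metadata lines
# starting with '#', then one token per line with up to six columns
# (IND WORD UPOS POS HIND REL). The '#' auto-fill marker handling is assumed.
def read_tle_sentence(lines):
    meta, tokens = {}, []
    fields = ["IND", "WORD", "UPOS", "POS", "HIND", "REL"]
    for line in lines:
        line = line.rstrip("\n")
        if not line.strip():
            continue
        if line.startswith("#"):
            key, _, value = line.lstrip("#").partition("=")
            meta[key.strip()] = value.strip()
        else:
            cols = line.split()
            tok = dict(zip(fields, cols))
            tok["AUTO_FILLED"] = len(cols) > 6 and cols[6] == "#"  # assumption
            tokens.append(tok)
    return meta, tokens

example = [
    '#SENT=That time I had to sleep in <ns type="MD"><c>a</c></ns> tent.',
    "#SEGMENT=",
    "#TYPO=",
    "1 That",
    "2 time",
]
print(read_tle_sentence(example))
```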
[
"All annotated sentences were randomly assigned to a second annotator (henceforth reviewer), in a double blind manner. The reviewer's task was to mark all the annotations that they would have annotated differently. To assist the review process, we compiled a list of common annotation errors, available in the released annotation manual.",
"The annotations were reviewed using an active editing scheme in which an explicit action was required for all the existing annotations. The scheme was introduced to prevent reviewers from overlooking annotation issues due to passive approval. Specifically, an additional # sign was added at the seventh column of each token's annotation. The reviewer then had to either “sign off” on the existing annotation by erasing the # sign, or provide an alternative annotation following the # sign."
],
[
"In the final stage of the annotation process all annotator-reviewer disagreements were resolved by a third annotator (henceforth judge), whose main task was to decide in favor of the annotator or the reviewer. Similarly to the review process, the judging task was carried out in a double blind manner. Judges were allowed to resolve annotator-reviewer disagreements with a third alternative, as well as introduce new corrections for annotation issues overlooked by the reviewers.",
"Another task performed by the judges was to mark acceptable alternative annotations for ambiguous structures determined through review disagreements or otherwise present in the sentence. These annotations were specified in an additional metadata field called AMBIGUITY. The ambiguity markings are provided along with the resolved version of the annotations."
],
[
"After applying the resolutions produced by the judges, we queried the corpus with debugging tests for specific linguistics constructions. This additional testing phase further reduced the number of annotation errors and inconsistencies in the treebank. Including the training period, the treebank creation lasted over a year, with an aggregate of more than 2,000 annotation hours."
],
[
"Our annotations use the existing inventory of English UD POS tags and dependency relations, and follow the standard UD annotation guidelines for English. However, these guidelines were formulated with grammatical usage of English in mind and do not cover non canonical syntactic structures arising due to grammatical errors. To encourage consistent and linguistically motivated annotation of such structures, we formulated a complementary set of ESL annotation guidelines.",
"Our ESL annotation guidelines follow the general principle of literal reading, which emphasizes syntactic analysis according to the observed language usage. This strategy continues a line of work in SLA which advocates for centering analysis of learner language around morpho-syntactic surface evidence BIBREF13 , BIBREF6 . Similarly to our framework, which includes a parallel annotation of corrected sentences, such strategies are often presented in the context of multi-layer annotation schemes that also account for error corrected sentence forms BIBREF14 , BIBREF5 , BIBREF15 .",
"Deploying a strategy of literal annotation within UD, a formalism which enforces cross-linguistic consistency of annotations, will enable meaningful comparisons between non-canonical structures in English and canonical structures in the author's native language. As a result, a key novel characteristic of our treebank is its ability to support cross-lingual studies of learner language."
],
[
"With respect to POS tagging, literal annotation implies adhering as much as possible to the observed morphological forms of the words. Syntactically, argument structure is annotated according to the usage of the word rather than its typical distribution in the relevant context. The following list of conventions defines the notion of literal reading for some of the common non canonical structures associated with grammatical errors.",
"Extraneous prepositions We annotate all nominal dependents introduced by extraneous prepositions as nominal modifiers. In the following sentence, “him” is marked as a nominal modifier (nmod) instead of an indirect object (iobj) of “give”.",
"#SENT=...I had to give <ns type=\"UT\"><i>to</i> </ns> him water... *1.5cm*1.3cm*1.1cm*1.1cm...",
"21 I PRON PRP 22 nsubj",
"22 had VERB VBD 5 parataxis",
"23 to PART TO 24 mark",
"24 give VERB VB 22 xcomp",
"25 to ADP IN 26 case",
"26 himPRON PRP 24 nmod",
"27 water NOUN NN 24 dobj",
"... ",
"Omitted prepositions We treat nominal dependents of a predicate that are lacking a preposition as arguments rather than nominal modifiers. In the example below, “money” is marked as a direct object (dobj) instead of a nominal modifier (nmod) of “ask”. As “you” functions in this context as a second argument of “ask”, it is annotated as an indirect object (iobj) instead of a direct object (dobj).",
"#SENT=...I have to ask you <ns type=\"MT\"> <c>for</c></ns> the money <ns type= \"RT\"> <i>of</i><c>for</c></ns> the tickets back. *1.5cm*1.3cm*1.1cm*1.1cm...",
"12 I PRON PRP 13 nsubj",
"13 have VERB VBP 2 conj",
"14 to PART TO 15 mark",
"15 ask VERB VB 13 xcomp",
"16 you PRON PRP 15 iobj",
"17 the DET DT 18 det",
"18 money NOUN NN 15 dobj",
"19 of ADP IN 21 case",
"20 the DET DT 21 det",
"21 ticketsNOUN NNS 18 nmod",
"22 back ADV RB 15 advmod",
"23 . PUNCT . 2 punct ",
"Cases of erroneous tense usage are annotated according to the morphological tense of the verb. For example, below we annotate “shopping” with present participle VBG, while the correction “shop” is annotated in the corrected version of the sentence as VBP.",
"#SENT=...when you <ns type=\"TV\"><i>shopping</i> <c>shop</c></ns>... *1.5cm*1.3cm*1.1cm*1.1cm...",
"4 when ADV WRB 6 advmod",
"5 you PRON PRP 6 nsubj",
"6 shopping VERB VBG 12 advcl",
"... ",
"Erroneous word formations that are contextually plausible and can be assigned with a PTB tag are annotated literally. In the following example, “stuffs” is handled as a plural count noun.",
"#SENT=...into fashionable <ns type=\"CN\"> <i>stuffs</i><c>stuff</c></ns>... *1.8cm*1.3cm*1.1cm*1.1cm...",
"7 into ADP IN 9 case",
"8 fashionable ADJ JJ 9 amod",
"9 stuffs NOUN NNS 2 ccomp",
"... ",
"Similarly, in the example below we annotate “necessaryiest” as a superlative.",
"#SENT=The necessaryiest things... *2.1cm*1.3cm*1.1cm*1.1cm1 The DET DT 3 det",
"2 necessaryiest ADJ JJS 3 amod",
"3 things NOUN NNS 0 root",
"... "
],
[
"Although our general annotation strategy for ESL follows literal sentence readings, several types of word formation errors make such readings uninformative or impossible, essentially forcing certain words to be annotated using some degree of interpretation BIBREF16 . We hence annotate the following cases in the original sentence according to an interpretation of an intended word meaning, obtained from the FCE error correction.",
"Spelling errors are annotated according to the correctly spelled version of the word. To support error analysis of automatic annotation tools, misspelled words that happen to form valid words are annotated in the metadata field TYPO for POS tags with respect to the most common usage of the misspelled word form. In the example below, the TYPO field contains the typical POS annotation of “where”, which is clearly unintended in the context of the sentence.",
"#SENT=...we <ns type=\"SX\"><i>where</i> <c>were</c></ns> invited to visit...",
"#TYPO=5 ADV WRB *1.5cm*1.3cm*1.1cm*1.1cm...",
"4 we PRON PRP 6 nsubjpass",
"5 where AUX VBD 6 auxpass",
"6 invited VERB VBN 0 root",
"7 to PART TO 8 mark",
"8 visit VERB VB 6 xcomp",
"... ",
"Erroneous word formations that cannot be assigned with an existing PTB tag are annotated with respect to the correct word form.",
"#SENT=I am <ns type=\"IV\"><i>writting</i> <c>writing</c></ns>... *1.5cm*1.3cm*1.1cm*1.1cm1 I PRON PRP 3 nsubj",
"2 am AUX VBP 3 aux",
"3 writting VERB VBG 0 root",
"... ",
"In particular, ill formed adjectives that have a plural suffix receive a standard adjectival POS tag. When applicable, such cases also receive an additional marking for unnecessary agreement in the error annotation using the attribute “ua”.",
"#SENT=...<ns type=\"IJ\" ua=true> <i>interestings</i><c>interesting</c></ns> things... *1.5cm*1.3cm*1.1cm*1.1cm...",
"6 interestings ADJ JJ 7 amod",
"7 things NOUN NNS 3 dobj",
"... ",
"Wrong word formations that result in a valid, but contextually implausible word form are also annotated according to the word correction. In the example below, the nominal form “sale” is likely to be an unintended result of an ill formed verb. Similarly to spelling errors that result in valid words, we mark the typical literal POS annotation in the TYPO metadata field.",
"#SENT=...they do not <ns type=\"DV\"><i>sale</i> <c>sell</c></ns> them...",
"#TYPO=15 NOUN NN *1.5cm*1.3cm*1.1cm*1.1cm...",
"12 they PRON PRP 15 nsubj",
"13 do AUX VBP 15 aux",
"14 not PART RB 15 neg",
"15 sale VERB VB 0 root",
"16 them PRON PRP 15 dobj",
"... ",
"Taken together, our ESL conventions cover many of the annotation challenges related to grammatical errors present in the TLE. In addition to the presented overview, the complete manual of ESL guidelines used by the annotators is publicly available. The manual contains further details on our annotation scheme, additional annotation guidelines and a list of common annotation errors. We plan to extend and refine these guidelines in future releases of the treebank."
],
[
"We utilize our two step review process to estimate agreement rates between annotators. We measure agreement as the fraction of annotation tokens approved by the editor. Table TABREF15 presents the agreement between annotators and reviewers, as well as the agreement between reviewers and the judges. Agreement measurements are provided for both the original the corrected versions of the dataset.",
"Overall, the results indicate a high agreement rate in the two editing tasks. Importantly, the gap between the agreement on the original and corrected sentences is small. Note that this result is obtained despite the introduction of several ESL annotation guidelines in the course of the annotation process, which inevitably increased the number of edits related to grammatical errors. We interpret this outcome as evidence for the effectiveness of the ESL annotation scheme in supporting consistent annotations of learner language."
],
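Agreement is measured above as the fraction of tokens the editor left unchanged, and the corresponding table caption additionally reports Cohen's kappa for POS tags and dependency labels. Both quantities reduce to one-liners, sketched below with invented toy inputs.

```python
# Agreement as described above: the fraction of tokens the editor left
# unchanged, plus Cohen's kappa for categorical fields (toy inputs, invented).
from sklearn.metrics import cohen_kappa_score

annotator = ["DET", "NOUN", "VERB", "ADP", "NOUN"]
reviewer  = ["DET", "NOUN", "VERB", "ADP", "PROPN"]

unchanged = sum(a == r for a, r in zip(annotator, reviewer)) / len(annotator)
kappa = cohen_kappa_score(annotator, reviewer)
print(f"agreement={unchanged:.2f}, kappa={kappa:.2f}")
```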
[
"The TLE enables studying parsing for learner language and exploring relationships between grammatical errors and parsing performance. Here, we present parsing benchmarks on our dataset, and provide several estimates for the extent to which grammatical errors degrade the quality of automatic POS tagging and dependency parsing.",
"Our first experiment measures tagging and parsing accuracy on the TLE and approximates the global impact of grammatical errors on automatic annotation via performance comparison between the original and error corrected sentence versions. In this, and subsequent experiments, we utilize version 2.2 of the Turbo tagger and Turbo parser BIBREF18 , state of the art tools for statistical POS tagging and dependency parsing.",
"Table TABREF16 presents tagging and parsing results on a test set of 500 TLE sentences (9,591 original tokens, 9,700 corrected tokens). Results are provided for three different training regimes. The first regime uses the training portion of version 1.3 of the EWT, the UD English treebank, containing 12,543 sentences (204,586 tokens). The second training mode uses 4,124 training sentences (78,541 original tokens, 79,581 corrected tokens) from the TLE corpus. In the third setup we combine these two training corpora. The remaining 500 TLE sentences (9,549 original tokens, 9,695 corrected tokens) are allocated to a development set, not used in this experiment. Parsing of the test sentences was performed on predicted POS tags.",
"The EWT training regime, which uses out of domain texts written in standard English, provides the lowest performance on all the evaluation metrics. An additional factor which negatively affects performance in this regime are systematic differences in the EWT annotation of possessive pronouns, expletives and names compared to the UD guidelines, which are utilized in the TLE. In particular, the EWT annotates possessive pronoun UPOS as PRON rather than DET, which leads the UPOS results in this setup to be lower than the PTB POS results. Improved results are obtained using the TLE training data, which, despite its smaller size, is closer in genre and syntactic characteristics to the TLE test set. The strongest PTB POS tagging and parsing results are obtained by combining the EWT with the TLE training data, yielding 95.77 POS accuracy and a UAS of 90.3 on the original version of the TLE test set.",
"The dual annotation of sentences in their original and error corrected forms enables estimating the impact of grammatical errors on tagging and parsing by examining the performance gaps between the two sentence versions. Averaged across the three training conditions, the POS tagging accuracy on the original sentences is lower than the accuracy on the sentence corrections by 1.0 UPOS and 0.61 POS. Parsing performance degrades by 1.9 UAS, 1.59 LA and 2.21 LAS.",
"To further elucidate the influence of grammatical errors on parsing quality, table TABREF17 compares performance on tokens in the original sentences appearing inside grammatical error tags to those appearing outside such tags. Although grammatical errors may lead to tagging and parsing errors with respect to any element in the sentence, we expect erroneous tokens to be more challenging to analyze compared to grammatical tokens.",
"This comparison indeed reveals a substantial difference between the two types of tokens, with an average gap of 5.0 UPOS, 6.65 POS, 4.67 UAS, 6.56 LA and 7.39 LAS. Note that differently from the global measurements in the first experiment, this analysis, which focuses on the local impact of remove/replace errors, suggests a stronger effect of grammatical errors on the dependency labels than on the dependency structure.",
"Finally, we measure tagging and parsing performance relative to the fraction of sentence tokens marked with grammatical errors. Similarly to the previous experiment, this analysis focuses on remove/replace rather than insert errors.",
"Figure 1 presents the average sentential performance as a function of the percentage of tokens in the original sentence marked with grammatical errors. In this experiment, we train the parser on the EWT training set and test on the entire TLE corpus. Performance curves are presented for POS, UAS and LAS on the original and error corrected versions of the annotations. We observe that while the performance on the corrected sentences is close to constant, original sentence performance is decreasing as the percentage of the erroneous tokens in the sentence grows.",
"Overall, our results suggest a negative, albeit limited effect of grammatical errors on parsing. This outcome contrasts a study by Geertzen et al. geertzen2013 which reported a larger performance gap of 7.6 UAS and 8.8 LAS between sentences with and without grammatical errors. We believe that our analysis provides a more accurate estimate of this impact, as it controls for both sentence content and sentence length. The latter factor is crucial, since it correlates positively with the number of grammatical errors in the sentence, and negatively with parsing accuracy."
],
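The evaluation above reports POS accuracy, UAS (unlabeled attachment score), LA (label accuracy) and LAS (labeled attachment score). These metrics reduce to simple token-level comparisons between gold and predicted annotations; a sketch follows, with each token represented as a (head_index, relation, pos_tag) tuple. The representation and toy values are assumptions for illustration, not the authors' evaluation code.

```python
# Token-level metrics used above, computed over parallel gold/predicted
# annotations. Each token is a (head_index, relation, pos_tag) tuple; this
# representation is an assumption for illustration only.
def attachment_scores(gold, pred):
    assert len(gold) == len(pred)
    n = len(gold)
    pos = sum(g[2] == p[2] for g, p in zip(gold, pred)) / n
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n
    la  = sum(g[1] == p[1] for g, p in zip(gold, pred)) / n
    las = sum(g[:2] == p[:2] for g, p in zip(gold, pred)) / n
    return {"POS": pos, "UAS": uas, "LA": la, "LAS": las}

# Toy three-token sentence (invented values).
gold = [(2, "det", "DT"), (0, "root", "NN"), (2, "punct", ".")]
pred = [(2, "det", "DT"), (0, "root", "NN"), (1, "punct", ".")]
print(attachment_scores(gold, pred))
```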
[
"Previous studies on learner language proposed several annotation schemes for both POS tags and syntax BIBREF14 , BIBREF5 , BIBREF6 , BIBREF15 . The unifying theme in these proposals is a multi-layered analysis aiming to decouple the observed language usage from conventional structures in the foreign language.",
"In the context of ESL, Dıaz et al. diaz2010towards propose three parallel POS tag annotations for the lexical, morphological and distributional forms of each word. In our work, we adopt the distinction between morphological word forms, which roughly correspond to our literal word readings, and distributional forms as the error corrected words. However, we account for morphological forms only when these constitute valid existing PTB POS tags and are contextually plausible. Furthermore, while the internal structure of invalid word forms is an interesting object of investigation, we believe that it is more suitable for annotation as word features rather than POS tags. Our treebank supports the addition of such features to the existing annotations.",
"The work of Ragheb and Dickinson dickinson2009dependency,ragheb2012defining,ragheb2013 proposes ESL annotation guidelines for POS tags and syntactic dependencies based on the CHILDES annotation framework. This approach, called “morphosyntactic dependencies” is related to our annotation scheme in its focus on surface structures. Differently from this proposal, our annotations are grounded in a parallel annotation of grammatical errors and include an additional layer of analysis for the corrected forms. Moreover, we refrain from introducing new syntactic categories and dependency relations specific to ESL, thereby supporting computational treatment of ESL using existing resources for standard English. At the same time, we utilize a multilingual formalism which, in conjunction with our literal annotation strategy, facilitates linking the annotations to native language syntax.",
"While the above mentioned studies focus on annotation guidelines, attention has also been drawn to the topic of parsing in the learner language domain. However, due to the shortage of syntactic resources for ESL, much of the work in this area resorted to using surrogates for learner data. For example, in Foster foster2007treebanks and Foster et al. foster2008 parsing experiments are carried out on synthetic learner-like data, that was created by automatic insertion of grammatical errors to well formed English text. In Cahill et al. cahill2014 a treebank of secondary level native students texts was used to approximate learner text in order to evaluate a parser that utilizes unlabeled learner data.",
"Syntactic annotations for ESL were previously developed by Nagata et al. nagata2011, who annotate an English learner corpus with POS tags and shallow syntactic parses. Our work departs from shallow syntax to full syntactic analysis, and provides annotations on a significantly larger scale. Furthermore, differently from this annotation effort, our treebank covers a wide range of learner native languages. An additional syntactic dataset for ESL, currently not available publicly, are 1,000 sentences from the EFCamDat dataset BIBREF8 , annotated with Stanford dependencies BIBREF19 . This dataset was used to measure the impact of grammatical errors on parsing by comparing performance on sentences with grammatical errors to error free sentences. The TLE enables a more direct way of estimating the magnitude of this performance gap by comparing performance on the same sentences in their original and error corrected versions. Our comparison suggests that the effect of grammatical errors on parsing is smaller that the one reported in this study."
],
[
"We present the first large scale treebank of learner language, manually annotated and double-reviewed for POS tags and universal dependencies. The annotation is accompanied by a linguistically motivated framework for handling syntactic structures associated with grammatical errors. Finally, we benchmark automatic tagging and parsing on our corpus, and measure the effect of grammatical errors on tagging and parsing quality. The treebank will support empirical study of learner syntax in NLP, corpus linguistics and second language acquisition."
],
[
"We thank Anna Korhonen for helpful discussions and insightful comments on this paper. We also thank Dora Alexopoulou, Andrei Barbu, Markus Dickinson, Sue Felshin, Jeroen Geertzen, Yan Huang, Detmar Meurers, Sampo Pyysalo, Roi Reichart and the anonymous reviewers for valuable feedback on this work. This material is based upon work supported by the Center for Brains, Minds, and Machines (CBMM), funded by NSF STC award CCF-1231216."
]
],
"section_name": [
"Introduction",
"Treebank Overview",
"Annotator Training",
"Annotation Procedure",
"Annotation",
"Review",
"Disagreement Resolution",
"Final Debugging",
"Annotation Scheme for ESL",
"Literal Annotation",
"Exceptions to Literal Annotation",
"Editing Agreement",
"Parsing Experiments",
"Related Work",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"136bfebf062db72907a626eba77c21dec0f64796",
"7f3f4bce4bdf0247b940cf8894f2a6c6cf6cb636"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Finally, a corpus that is annotated with both grammatical errors and syntactic dependencies paves the way for empirical investigation of the relation between grammaticality and syntax. Understanding this relation is vital for improving tagging and parsing performance on learner language BIBREF8 , syntax based grammatical error correction BIBREF9 , BIBREF10 , and many other fundamental challenges in NLP. In this work, we take the first step in this direction by benchmarking tagging and parsing accuracy on our dataset under different training regimes, and obtaining several estimates for the impact of grammatical errors on these tasks."
],
"extractive_spans": [],
"free_form_answer": "It will improve tagging and parsing performance, syntax based grammatical error correction.",
"highlighted_evidence": [
"Finally, a corpus that is annotated with both grammatical errors and syntactic dependencies paves the way for empirical investigation of the relation between grammaticality and syntax. Understanding this relation is vital for improving tagging and parsing performance on learner language BIBREF8 , syntax based grammatical error correction BIBREF9 , BIBREF10 , and many other fundamental challenges in NLP. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"11891bd1343458b8789557c20212f44111c982bc",
"fd9afa2726836f7e1eb3097f4e2068fe3704af7d"
],
"answer": [
{
"evidence": [
"Our first experiment measures tagging and parsing accuracy on the TLE and approximates the global impact of grammatical errors on automatic annotation via performance comparison between the original and error corrected sentence versions. In this, and subsequent experiments, we utilize version 2.2 of the Turbo tagger and Turbo parser BIBREF18 , state of the art tools for statistical POS tagging and dependency parsing."
],
"extractive_spans": [
"version 2.2 of the Turbo tagger and Turbo parser BIBREF18"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this, and subsequent experiments, we utilize version 2.2 of the Turbo tagger and Turbo parser BIBREF18 , state of the art tools for statistical POS tagging and dependency parsing."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our first experiment measures tagging and parsing accuracy on the TLE and approximates the global impact of grammatical errors on automatic annotation via performance comparison between the original and error corrected sentence versions. In this, and subsequent experiments, we utilize version 2.2 of the Turbo tagger and Turbo parser BIBREF18 , state of the art tools for statistical POS tagging and dependency parsing."
],
"extractive_spans": [
"Turbo tagger",
"Turbo parser"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this, and subsequent experiments, we utilize version 2.2 of the Turbo tagger and Turbo parser BIBREF18 , state of the art tools for statistical POS tagging and dependency parsing."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"68ea739689b157d4db290bd49785140af06a2cac",
"883af99a1389180174dbf0d0aba5ee29b3d5bbfd"
],
"answer": [
{
"evidence": [
"The TLE currently contains 5,124 sentences (97,681 tokens) with POS tag and dependency annotations in the English Universal Dependencies (UD) formalism BIBREF2 , BIBREF3 . The sentences were obtained from the FCE corpus BIBREF1 , a collection of upper intermediate English learner essays, containing error annotations with 75 error categories BIBREF7 . Sentence level segmentation was performed using an adaptation of the NLTK sentence tokenizer. Under-segmented sentences were split further manually. Word level tokenization was generated using the Stanford PTB word tokenizer."
],
"extractive_spans": [],
"free_form_answer": "5124",
"highlighted_evidence": [
"The TLE currently contains 5,124 sentences (97,681 tokens) with POS tag and dependency annotations in the English Universal Dependencies (UD) formalism BIBREF2 , BIBREF3 . "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The TLE currently contains 5,124 sentences (97,681 tokens) with POS tag and dependency annotations in the English Universal Dependencies (UD) formalism BIBREF2 , BIBREF3 . The sentences were obtained from the FCE corpus BIBREF1 , a collection of upper intermediate English learner essays, containing error annotations with 75 error categories BIBREF7 . Sentence level segmentation was performed using an adaptation of the NLTK sentence tokenizer. Under-segmented sentences were split further manually. Word level tokenization was generated using the Stanford PTB word tokenizer."
],
"extractive_spans": [
" 5,124 sentences (97,681 tokens)"
],
"free_form_answer": "",
"highlighted_evidence": [
"The TLE currently contains 5,124 sentences (97,681 tokens) with POS tag and dependency annotations in the English Universal Dependencies (UD) formalism BIBREF2 , BIBREF3 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0eb812c73d44a075385d2d7b858e78759bb5eeaf",
"10ab739e729467936c3b3ef4e65b0fa165ac90d4"
],
"answer": [
{
"evidence": [
"The treebank was annotated by six students, five undergraduates and one graduate. Among the undergraduates, three are linguistics majors and two are engineering majors with a linguistic minor. The graduate student is a linguist specializing in syntax. An additional graduate student in NLP participated in the final debugging of the dataset."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The treebank was annotated by six students, five undergraduates and one graduate."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"The treebank was annotated by six students, five undergraduates and one graduate. Among the undergraduates, three are linguistics majors and two are engineering majors with a linguistic minor. The graduate student is a linguist specializing in syntax. An additional graduate student in NLP participated in the final debugging of the dataset."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The treebank was annotated by six students, five undergraduates and one graduate. Among the undergraduates, three are linguistics majors and two are engineering majors with a linguistic minor. The graduate student is a linguist specializing in syntax. An additional graduate student in NLP participated in the final debugging of the dataset."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How do they think this treebank will support research on second language acquisition?",
"What are their baseline models?",
"How long is the dataset?",
"Did they use crowdsourcing to annotate the dataset?"
],
"question_id": [
"7546125f43eec5b09a3368c95019cb2bf1478255",
"e96b0d64c8d9fdd90235c499bf1ec562d2cbb8b2",
"576a3ed6e4faa4c3893db632e97a52ac6e864aac",
"73c535a7b46f0c2408ea2b1da0a878b376a2bca5"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Statistics of the TLE. Standard deviations are denoted in parenthesis.",
"Table 2: Inter-annotator agreement on the entire TLE corpus. Agreement is measured as the fraction of tokens that remain unchanged after an editing round. The four evaluation columns correspond to universal POS tags, PTB POS tags, unlabeled attachment, and dependency labels. Cohen’s Kappa scores (Cohen, 1960) for POS tags and dependency labels in all evaluation conditions are above 0.96.",
"Table 3: Tagging and parsing results on a test set of 500 sentences from the TLE corpus. UPOS and POS are accuracies on universal and PTB tags. UAS is unlabeled attachment score, LA is dependency label accuracy, and LAS is labeled attachment score. EWT is the English UD treebank. TLEorig are original sentences from the TLE. TLEcorr are the corresponding error corrected sentences.",
"Table 4: Tagging and parsing results on the original version of the TLE test set for tokens marked with grammatical errors (Ungrammatical) and tokens not marked for errors (Grammatical).",
"Figure 1: Mean per sentence POS accuracy, UAS and LAS of the Turbo parser, as a function of the percentage of original sentence tokens marked with grammatical errors. The parser is trained on the EWT corpus, and tested on all 5,124 sentences of the TLE. Points connected by continuous lines denote performance on original TLE sentences. Points connected by dashed lines denote performance on the corresponding error corrected sentences. The number of sentences whose errors fall within each percentage range appears below the range in parenthesis."
],
"file": [
"2-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"7-Figure1-1.png"
]
} | [
"How do they think this treebank will support research on second language acquisition?",
"How long is the dataset?"
] | [
[
"1605.04278-Introduction-3"
],
[
"1605.04278-Treebank Overview-0"
]
] | [
"It will improve tagging and parsing performance, syntax based grammatical error correction.",
"5124"
] | 307 |
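The TLE record above describes its preprocessing as NLTK-based sentence segmentation followed by Penn Treebank-style word tokenization. The short Python sketch below only approximates that pipeline: the actual corpus was built with an adapted NLTK sentence tokenizer, manual re-splitting of under-segmented sentences, and the Stanford PTB tokenizer, so the helper function, the sample essay text, and the use of NLTK's TreebankWordTokenizer here are illustrative assumptions rather than the authors' code.

import nltk
from nltk.tokenize import sent_tokenize, TreebankWordTokenizer

nltk.download("punkt", quiet=True)  # models needed by sent_tokenize

word_tokenizer = TreebankWordTokenizer()  # Penn Treebank-style word tokenizer

def segment_and_tokenize(essay_text):
    """Sentence-segment an essay, then tokenize each sentence into PTB-style tokens."""
    sentences = sent_tokenize(essay_text)                   # sentence-level segmentation
    return [word_tokenizer.tokenize(s) for s in sentences]  # word-level tokenization

# Toy learner-essay fragment (spelling error intentional; illustrative only).
essay_text = "I have recieved your letter. It's very kind of you!"
for tokens in segment_and_tokenize(essay_text):
    print(tokens)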
1908.09590 | Rethinking Attribute Representation and Injection for Sentiment Classification | Text attributes, such as user and product information in product reviews, have been used to improve the performance of sentiment classification models. The de facto standard method is to incorporate them as additional biases in the attention mechanism, and more performance gains are achieved by extending the model architecture. In this paper, we show that the above method is the least effective way to represent and inject attributes. To demonstrate this hypothesis, unlike previous models with complicated architectures, we limit our base model to a simple BiLSTM with attention classifier, and instead focus on how and where the attributes should be incorporated in the model. We propose to represent attributes as chunk-wise importance weight matrices and consider four locations in the model (i.e., embedding, encoding, attention, classifier) to inject attributes. Experiments show that our proposed method achieves significant improvements over the standard approach and that attention mechanism is the worst location to inject attributes, contradicting prior work. We also outperform the state-of-the-art despite our use of a simple base model. Finally, we show that these representations transfer well to other tasks. Model implementation and datasets are released here: this https URL. | {
"paragraphs": [
[
"The use of categorical attributes (e.g., user, topic, aspects) in the sentiment analysis community BIBREF0, BIBREF1, BIBREF2 is widespread. Prior to the deep learning era, these information were used as effective categorical features BIBREF3, BIBREF4, BIBREF5, BIBREF6 for the machine learning model. Recent work has used them to improve the overall performance BIBREF7, BIBREF8, interpretability BIBREF9, BIBREF10, and personalization BIBREF11 of neural network models in different tasks such as sentiment classification BIBREF12, review summarization BIBREF13, and text generation BIBREF8.",
"In particular, user and product information have been widely incorporated in sentiment classification models, especially since they are important metadata attributes found in review websites. BIBREF12 first showed significant accuracy increase of neural models when these information are used. Currently, the accepted standard method is to use them as additional biases when computing the weights $a$ in the attention mechanism, as introduced by BIBREF7 as:",
"where $u$ and $p$ are the user and product embeddings, and $h$ is a word encoding from BiLSTM. Since then, most of the subsequent work attempted to improve the model by extending the model architecture to be able to utilize external features BIBREF14, handle cold-start entities BIBREF9, and represent user and product separately BIBREF15.",
"Intuitively, however, this method is not the ideal method to represent and inject attributes because of two reasons. First, representing attributes as additional biases cannot model the relationship between the text and attributes. Rather, it only adds a user- and product-specific biases that are independent from the text when calculating the attention weights. Second, injecting the attributes in the attention mechanism means that user and product information are only used to customize how the model choose which words to focus on, as also shown empirically in previous work BIBREF7, BIBREF15. However, we argue that there are more intuitive locations to inject the attributes such as when contextualizing words to modify their sentiment intensity.",
"We propose to represent user and product information as weight matrices (i.e., $W$ in the equation above). Directly incorporating these attributes into $W$ leads to large increase in parameters and subsequently makes the model difficult to optimize. To mitigate these problems, we introduce chunk-wise importance weight matrices, which (1) uses a weight matrix smaller than $W$ by a chunk size factor, and (2) transforms these matrix into gates such that it corresponds to the relative importance of each neuron in $W$. We investigate the use of this method when injected to several locations in the base model: word embeddings, BiLSTM encoder, attention mechanism, and logistic classifier.",
"The results of our experiments can be summarized in three statements. First, our preliminary experiments show that doing bias-based attribute representation and attention-based injection is not an effective method to incorporate user and product information in sentiment classification models. Second, despite using only a simple BiLSTM with attention classifier, we significantly outperform previous state-of-the-art models that use more complicated architectures (e.g., models that use hierarchical models, external memory networks, etc.). Finally, we show that these attribute representations transfer well to other tasks such as product category classification and review headline generation."
],
[
"In this section, we explore different ways on how to represent attributes and where in the model can we inject them."
],
[
"The majority of this paper uses a base model that accepts a review $\\mathbf {x}=x_1,...,x_n$ as input and returns a sentiment $y$ as output, which we extend to also accept the corresponding user $u$ and product $p$ attributes as additional inputs. Different from previous work where models use complex architectures such as hierarchical LSTMs BIBREF7, BIBREF14 and external memory networks BIBREF16, BIBREF17, we aim to achieve improvements by only modifying how we represent and inject attributes. Thus, we use a simple classifier as our base model, which consists of four parts explained briefly as follows.",
"First, we embed $\\mathbf {x}$ using a word embedding matrix that returns word embeddings $x^{\\prime }_1,...,x^{\\prime }_n$. We subsequently apply a non-linear function to each word:",
"Second, we run a bidirectional LSTM BIBREF18 encoder to contextualize the words into $h_t=[\\overrightarrow{h}_t;\\overleftarrow{h}_t]$ based on their forward and backward neighbors. The forward and backward LSTM look similar, thus for brevity we only show the forward LSTM below:",
"Third, we pool the encodings $h_t$ into one document encoding $d$ using attention mechanism BIBREF19, where $v$ is a latent representation of informativeness BIBREF20:",
"Finally, we classify the document using a logistic classifier to get a predicted $y^{\\prime }$:",
"Training is done normally by minimizing the cross entropy loss."
],
[
"Note that at each part of the model, we see similar non-linear functions, all using the same form, i.e. $g(f(x)) = g(Wx + b)$, where $f(x)$ is an affine transformation function of $x$, $g$ is a non-linear activation, $W$ and $b$ are weight matrix and bias parameters, respectively. Without extending the base model architecture, we can represent the attributes either as the weight matrix $W$ or as the bias $b$ to one of these functions by modifying them to accept $u$ and $p$ as inputs, i.e. $f(x,u,p)$."
],
[
"The current accepted standard approach to represent the attributes is through the bias parameter $b$. Most of the previous work BIBREF7, BIBREF14, BIBREF9, BIBREF21 use Equation DISPLAY_FORM2 in the attention mechanism, which basically updates the original bias $b$ to $b^{\\prime } = W_u u + W_p p + b$. However, we argue that this is not the ideal way to incorporate attributes since it means we only add a user- and product-specific bias towards the goal of the function, without looking at the text. Figure FIGREF9 shows an intuitive example: When we represent user $u$ as a bias in the logistic classifier, in which it means that $u$ has a biased logits vector $b_u$ of classifying the text as a certain sentiment (e.g., $u$ tends to classify texts as three-star positive), shifting the final probability distribution regardless of what the text content may have been."
],
[
"A more intuitive way of representing attributes is through the weight matrix $W$. Specifically, given the attribute embeddings $u$ and $p$, we linearly transform their concatenation into a vector $w^{\\prime }$ of size $D_1*D_2$ where $D_1$ and $D_2$ are the dimensions of $W$. We then reshape $w^{\\prime }$ into $W^{\\prime }$ to get the same shape as $W$ and replace $W$ with $W^{\\prime }$:",
"Theoretically, this should perform better than bias-based representations since direct relationship between text and attributes are modeled. For example, following the example above, $W^{\\prime }x$ is a user-biased logits vector based on the document encoding $d$ (e.g., $u$ tends to classify texts as two-star positive when the text mentions that the dessert was sweet).",
"However, the model is burdened by a large number of parameters; matrix-based attribute representation increases the number of parameters by $|U|*|P|*D_1*D_2$, where $|U|$ and $|P|$ correspond to the number of users and products, respectively. This subsequently makes the weights difficult to optimize during training. Thus, directly incorporating attributes into the weight matrix may cause harm in the performance of the model."
],
[
"We introduce Chunk-wise Importance Matrix (CHIM) based representation, which improves over the matrix-based approach by mitigating the optimization problems mentioned above, using the following two tricks. First, instead of using a big weight matrix $W^{\\prime }$ of shape $(D_1, D_2)$, we use a chunked weight matrix $C$ of shape $(D_1/C_1, D_2/C_2)$ where $C_1$ and $C_2$ are chunk size factors. Second, we use the chunked weight matrix as importance gates that shrinks the weights close to zero when they are deemed unimportant. We show the CHIM-based representation method in Figure FIGREF16.",
"We start by linearly transforming the concatenated attributes into $c$. Then we reshape $c$ into $C$ with shape $(D_1/C_1, D_2/C_2)$. These operations are similar to Equations DISPLAY_FORM14 and . We then repeat this matrix $C_1*C_2$ times and concatenate them such that we create a matrix $W^{\\prime }$ of shape $(D_1, D_2)$. Finally, we use the sigmoid function $\\sigma $ to transform the matrix into gates that represent importance:",
"Finally we broadcast-multiply $W^{\\prime }$ with the original weight matrix $W$ to shrink the weights. The result is a sparse version of $W$, which can be seen as either a regularization step BIBREF22 where most weights are set close to zero, or a correction step BIBREF23 where the important gates are used to correct the weights. The use of multiple chunks regards CHIM as coarse-grained access control BIBREF24 where the use of different important gates for every node is unnecessary and expensive. The final function is shown below:",
"To summarize, chunking helps reduce the number of parameters while retaining the model performance, and importance matrix makes optimization easier during training, resulting to a performance improvement. We also tried alternative methods for importance matrix such as residual addition (i.e., $\\tanh (W^{\\prime }) + W$) introduced in BIBREF25, and low-rank adaptation methods BIBREF26, BIBREF27, but these did not improve the model performance."
],
[
"Using the approaches described above, we can inject attribute representation into four different parts of the model. This section describes what it means to inject attributes to a certain location and why previous work have been injecting them in the worst location (i.e., in the attention mechanism)."
],
[
"Injecting attributes to the attention mechanism means that we bias the selection of more informative words during pooling. For example, in Figure FIGREF9, a user may find delicious drinks to be the most important aspect in a restaurant. Injection in the attention mechanism would bias the selection of words such as wine, smooth, and sweet to create the document encoding. This is the standard location in the model to inject the attributes, and several BIBREF7, BIBREF9 have shown how the injected attention mechanism selects different words when the given user or product is different.",
"We argue, however, that attention mechanism is not the best location to inject the attributes. This is because we cannot obtain user- or product-biased sentiment information from the representation. In the example above, although we may be able to select, with user bias, the words wine and sweet in the text, we do not know whether the user has a positive or negative sentiment towards these words (e.g., Does the user like wine? How about sweet wines? etc.). In contrast, the three other locations we discuss below use the attributes to modify how the model looks at sentiment at different levels of textual granularity."
],
[
"Injecting attributes to the word embedding means that we bias the sentiment intensity of a word independent from its neighboring context. For example, if a user normally uses the words tasty and delicious with a less and more positive intensity, respectively, the corresponding attribute-injected word embeddings would come out less similar, despite both words being synonymous."
],
[
"Injecting attributes to the encoder means that we bias the contextualization of words based on their neighbors in the text. For example, if a user likes their cake sweet but their drink with no sugar, the attribute-injected encoder would give a positive signal to the encoding of sweet in the text “the cake was sweet” and a negative signal in the text “the drink was sweet”."
],
[
"Injecting attributes to the classifier means that we bias the probability distribution of sentiment based on the final document encoding. If a user tends to classify the sentiment of reviews about sweet cakes as highly positive, then the model would give a high probability to highly positive sentiment classes for texts such as “the cake was sweet”."
],
[
"We perform experiments on two tasks. The first task is Sentiment Classification, where we are tasked to classify the sentiment of a review text, given additionally the user and product information as attributes. The second task is Attribute Transfer, where we attempt to transfer the attribute encodings learned from the sentiment classification model to solve two other different tasks: (a) Product Category Classification, where we are tasked to classify the category of the product, and (b) Review Headline Generation, where we are tasked to generate the title of the review, given only both the user and product attribute encodings. Datasets, evaluation metrics, and competing models are different for each task and are described in their corresponding sections.",
"Unless otherwise stated, our models are implemented with the following settings. We set the dimensions of the word, user, and product vectors to 300. We use pre-trained GloVe embeddings BIBREF28 to initialize the word vectors. We also set the dimensions of the hidden state of BiLSTM to 300 (i.e., 150 dimensions for each of the forward/backward hidden state). The chunk size factors $C_1$ and $C_2$ are both set to 15. We use dropout BIBREF29 on all non-linear connections with a dropout rate of 0.1. We set the batch size to 32. Training is done via stochastic gradient descent over shuffled mini-batches with the Adadelta update rule BIBREF30 and with $l_2$ constraint BIBREF31 of 3. We perform early stopping using the development set. Training and experiments are done using an NVIDIA GeForce GTX 1080 Ti graphics card."
],
[
"We use the three widely used sentiment classification datasets with user and product information available: IMDB, Yelp 2013, and Yelp 2014 datasets. These datasets are curated by BIBREF12, where they ensured twenty-core for both users and products (i.e., users have at least twenty products and vice versa), split them into train, dev, and test sets with an 8:1:1 ratio, and tokenized and sentence-split using the Stanford CoreNLP BIBREF32. Dataset statistics are shown in Table TABREF20. Evaluation is done using two metrics: the accuracy which measures the overall sentiment classification performance, and RMSE which measures the divergence between predicted and ground truth classes."
],
[
"To conduct a fair comparison among the different methods described in Section SECREF2, we compare these methods when applied to our base model using the development set of the datasets. Specifically, we use a smaller version of our base model (with dimensions set to 64) and incorporate the user and product attributes using nine different approaches: (1) bias-attention: the bias-based method injected to the attention mechanism, (2-5) the matrix-based method injected to four different locations (matrix-embedding, matrix-encoder, matrix-attention, matrix-classifier), and (6-9) the CHIM-based method injected to four different locations (CHIM-embedding, CHIM-encoder, CHIM-attention, CHIM-classifier). We then calculate the accuracy of each approach for all datasets.",
"Results are shown in Figure FIGREF25. The figure shows that bias-attention consistently performs poorly compared to other approaches. As expected, matrix-based representations perform the worst when injected to embeddings and encoder, however we can already see improvements over bias-attention when these representations are injected to attention and classifier. This is because the number of parameters used in the the weight matrices of attention and classifier are relatively smaller compared to those of embeddings and encoder, thus they are easier to optimize. The CHIM-based representations perform the best among other approaches, where CHIM-embedding garners the highest accuracy across datasets. Finally, even when using a better representation method, CHIM-attention consistently performs the worst among CHIM-based representations. This shows that attention mechanism is not the optimal location to inject attributes."
],
[
"We also compare with models from previous work, listed below:",
"UPNN BIBREF12 uses a CNN classifier as base model and incorporates attributes as user- and product-specific weight parameters in the word embeddings and logistic classifier.",
"UPDMN BIBREF16 uses an LSTM classifier as base model and incorporates attributes as a separate deep memory network that uses other related documents as memory.",
"NSC BIBREF7 uses a hierarchical LSTM classifier as base model and incorporates attributes using the bias-attention method on both word- and sentence-level LSTMs.",
"DUPMN BIBREF17 also uses a hierarchical LSTM as base model and incorporates attributes as two separate deep memory network, one for each attribute.",
"PMA BIBREF14 is similar to NSC but uses external features such as the ranking preference method of a specific user.",
"HCSC BIBREF9 uses a combination of BiLSTM and CNN as base model, incorporates attributes using the bias-attention method, and also considers the existence of cold start entities.",
"CMA BIBREF15 uses a combination of LSTM and hierarchical attention classifier as base model, incorporates attributes using the bias-attention method, and does this separately for user and product.",
"Notice that most of these models, especially the later ones, use the bias-attention method to represent and inject attributes, but also employ a more complex model architecture to enjoy a boost in performance. Results are summarized in Table TABREF33. On all three datasets, our best results outperform all previous models based on accuracy and RMSE. Among our four models, CHIM-embedding performs the best in terms of accuracy, with performance increases of 2.4%, 1.3%, and 1.6% on IMDB, Yelp 2013, and Yelp 2014, respectively. CHIM-classifier performs the best in terms of RMSE, outperforming all other models on both Yelp 2013 and 2014 datasets. Among our models, CHIM-attention mechanism performs the worst, which shows similar results to our previous experiment (see Figure FIGREF25). We emphasize that our models use a simple BiLSTM as base model, and extensions to the base model (e.g., using multiple hierarchical LSTMs as in BIBREF21), as well as to other aspects (e.g., consideration of cold-start entities as in BIBREF9), are orthogonal to our proposed attribute representation and injection method. Thus, we expect a further increase in performance when these extensions are done."
],
[
"In this section, we investigate whether it is possible to transfer the attribute encodings, learned from the sentiment classification model, to other tasks: product category classification and review headline generation. The experimental setup is as follows. First, we train a sentiment classification model using an attribute representation and injection method of choice to learn the attribute encodings. Then, we use these fixed encodings as input to the task-specific model."
],
[
"We collected a new dataset from Amazon, which includes the product category and the review headline, aside from the review text, the sentiment score, and the user and product attributes. Following BIBREF12, we ensured that both users and products are twenty-core, split them into train, dev, and test sets with an 8:1:1 ratio, and tokenized and sentence-split the text using Stanford CoreNLP BIBREF32. The final dataset contains 77,028 data points, with 1,728 users and 1,890 products. This is used as the sentiment classification dataset. To create the task-specific datasets, we split the dataset again such that no users and no products are seen in at least two different splits. That is, if user $u$ is found in the train set, then it should not be found in the dev and the test sets. We remove the user-product pairs that do not satistfy this condition. We then append the corresponding product category and review headline for each user-product pair. The final split contains 46,151 training, 711 development, and 840 test instances. It also contains two product categories: Music and Video DVD. The review headline is tokenized using SentencePiece with 10k vocabulary. The datasets are released here for reproducibility: https://github.com/rktamplayo/CHIM."
],
[
"In this experiment, we compare five different attribute representation and injection methods: (1) the bias-attention method, and (2-5) the CHIM-based representation method injected to all four different locations in the model. We use the attribute encodings, which are learned from pre-training on the sentiment classification dataset, as input to the transfer tasks, in which they are fixed and not updated during training. As a baseline, we also show results when using encodings of randomly set weights. Moreover, we additionally show the majority class as additional baseline for product category classification. For the product category classification task, we use a logistic classifier as the classification model and accuracy as the evaluation metric. For the review headline generation task, we use an LSTM decoder as the generation model and perplexity as the evaluation metric."
],
[
"For the product category classification task, the results are reported in Table TABREF47. The table shows that representations learned from CHIM-based methods perform better than the random baseline. The best model, CHIM-encoder, achieves an increase of at least 3 points in accuracy compared to the baseline. This means that, interestingly, CHIM-based attribute representations have also learned information about the category of the product. In contrast, representations learned from the bias-attention method are not able to transfer well on this task, leading to worse results compared to the random and majority baseline. Moreover, CHIM-attention performs the worst among CHIM-based models, which further shows the ineffectiveness of injecting attributes to the attention mechanism.",
"Results for the review headline generation task are also shown in Table TABREF47. The table shows less promising results, where the best model, CHIM-encoder, achieves a decrease of 0.88 points in perplexity from the random encodings. Although this still means that some information has been transferred, one may argue that the gain is too small to be considered significant. However, it has been well perceived, that using only the user and product attributes to generate text is unreasonable, since we expect the model to generate coherent texts using only two vectors. This impossibility is also reported by BIBREF8 where they also used sentiment information, and BIBREF33 where they additionally used learned aspects and a short version of the text to be able to generate well-formed texts. Nevertheless, the results in this experiment agree to the results above regarding injecting attributes to the attention mechanism; bias-attention performs worse than the random baseline, and CHIM-attention performs the worst among CHIM-based models."
],
[
"All our experiments unanimously show that (a) the bias-based attribute representation method is not the most optimal method, and (b) injecting attributes in the attention mechanism results to the worst performance among all locations in the model, regardless of the representation method used. The question “where is the best location to inject attributes?” remains unanswered, since different tasks and settings produce different best models. That is, CHIM-embedding achieves the best accuracy while CHIM-classifier achieves the best RMSE on sentiment classification. Moreover, CHIM-encoder produces the most transferable attribute encoding for both product category classification and review headline generation. The suggestion then is to conduct experiments on all locations and check which one is best for the task at hand.",
"Finally, we also investigate whether injecting in to more than one location would result to better performance. Specifically, we jointly inject in two different locations at once using CHIM, and do this for all possible pairs of locations. We use the smaller version of our base model and calculate the accuracies of different models using the development set of the Yelp 2013 dataset. Figure FIGREF49 shows a heatmap of the accuracies of jointly injected models, as well as singly injected models. Overall, the results are mixed and can be summarized into two statements. Firstly, injecting on the embedding and another location (aside from the attention mechanism) leads to a slight decrease in performance. Secondly and interestingly, injecting on the attention mechanism and another location always leads to the highest increase in performance, where CHIM-attention+embedding performs the best, outperforming CHIM-embedding. This shows that injecting in different locations might capture different information, and we leave this investigation for future work."
],
[
"Aside from user and product information, other attributes have been used for sentiment classification. Location-based BIBREF34 and time-based BIBREF35 attributes help contextualize the sentiment geographically and temporally. Latent attributes that are learned from another model have also been employed as additional features, such as latent topics from a topic model BIBREF36, latent aspects from an aspect extraction model BIBREF37, argumentation features BIBREF38, among others. Unfortunately, current benchmark datasets do not include these attributes, thus it is practically impossible to compare and use these attributes in our experiments. Nevertheless, the methods in this paper are not limited to only user and product attributes, but also to these other attributes as well, whenever available."
],
[
"Incorporating user and product attributes to NLP models makes them more personalized and thus user satisfaction can be increased BIBREF39. Examples of other NLP tasks that use these attributes are text classification BIBREF27, language modeling BIBREF26, text generation BIBREF8, BIBREF33, review summarization BIBREF40, machine translation BIBREF41, and dialogue response generation BIBREF42. On these tasks, the usage of the bias-attention method is frequent since it is trivially easy and there have been no attempts to investigate different possible methods for attribute representation and injection. We expect this paper to serve as the first investigatory paper that contradicts to the positive results previous work have seen from the bias-attention method."
],
[
"We showed that the current accepted standard for attribute representation and injection, i.e. bias-attention, which incorporates attributes as additional biases in the attention mechanism, is the least effective method. We proposed to represent attributes as chunk-wise importance weight matrices (CHIM) and showed that this representation method significantly outperforms the bias-attention method. Despite using a simple BiLSTM classifier as base model, CHIM significantly outperforms the current state-of-the-art models, even when those models use a more complex base model architecture. Furthermore, we conducted several experiments that conclude that injection to the attention mechanism, no matter which representation method is used, garners the worst performance. This result contradicts previously reported conclusions regarding attribute injection to the attention mechanism. Finally, we show promising results on transferring the attribute representations from sentiment classification, and use them to two different tasks such as product category classification and review headline generation."
],
[
"We would like to thank the anonymous reviewers for their helpful feedback and suggestions. Reinald Kim Amplayo is grateful to be supported by a Google PhD Fellowship."
]
],
"section_name": [
"Introduction",
"How and Where to Inject Attributes?",
"How and Where to Inject Attributes? ::: The Base Model",
"How and Where to Inject Attributes? ::: How: Attribute Representation",
"How and Where to Inject Attributes? ::: How: Attribute Representation ::: Bias-based",
"How and Where to Inject Attributes? ::: How: Attribute Representation ::: Matrix-based",
"How and Where to Inject Attributes? ::: How: Attribute Representation ::: CHIM-based",
"How and Where to Inject Attributes? ::: Where: Attribute Injection",
"How and Where to Inject Attributes? ::: Where: Attribute Injection ::: In the attention mechanism",
"How and Where to Inject Attributes? ::: Where: Attribute Injection ::: In the word embedding",
"How and Where to Inject Attributes? ::: Where: Attribute Injection ::: In the BiLSTM encoder",
"How and Where to Inject Attributes? ::: Where: Attribute Injection ::: In the logistic classifier",
"Experiments ::: General Setup",
"Experiments ::: Sentiment Classification ::: Datasets and Evaluation",
"Experiments ::: Sentiment Classification ::: Comparisons of different attribute representation and injection methods",
"Experiments ::: Sentiment Classification ::: Comparisons with models in the literature",
"Experiments ::: Attribute Transfer",
"Experiments ::: Attribute Transfer ::: Dataset",
"Experiments ::: Attribute Transfer ::: Evaluation",
"Experiments ::: Attribute Transfer ::: Results",
"Experiments ::: Where should attributes be injected?",
"Related Work ::: Attributes for Sentiment Classification",
"Related Work ::: User/Product Attributes for NLP Tasks",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"b2626fd453b809bc401dd0d2a1496215f5ab25cd",
"c7b4466e70f0d06bc3c358321a9bc95930c8102c"
],
"answer": [
{
"evidence": [
"Notice that most of these models, especially the later ones, use the bias-attention method to represent and inject attributes, but also employ a more complex model architecture to enjoy a boost in performance. Results are summarized in Table TABREF33. On all three datasets, our best results outperform all previous models based on accuracy and RMSE. Among our four models, CHIM-embedding performs the best in terms of accuracy, with performance increases of 2.4%, 1.3%, and 1.6% on IMDB, Yelp 2013, and Yelp 2014, respectively. CHIM-classifier performs the best in terms of RMSE, outperforming all other models on both Yelp 2013 and 2014 datasets. Among our models, CHIM-attention mechanism performs the worst, which shows similar results to our previous experiment (see Figure FIGREF25). We emphasize that our models use a simple BiLSTM as base model, and extensions to the base model (e.g., using multiple hierarchical LSTMs as in BIBREF21), as well as to other aspects (e.g., consideration of cold-start entities as in BIBREF9), are orthogonal to our proposed attribute representation and injection method. Thus, we expect a further increase in performance when these extensions are done."
],
"extractive_spans": [
"with performance increases of 2.4%, 1.3%, and 1.6% on IMDB, Yelp 2013, and Yelp 2014, respectively"
],
"free_form_answer": "",
"highlighted_evidence": [
"On all three datasets, our best results outperform all previous models based on accuracy and RMSE. Among our four models, CHIM-embedding performs the best in terms of accuracy, with performance increases of 2.4%, 1.3%, and 1.6% on IMDB, Yelp 2013, and Yelp 2014, respectively."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Notice that most of these models, especially the later ones, use the bias-attention method to represent and inject attributes, but also employ a more complex model architecture to enjoy a boost in performance. Results are summarized in Table TABREF33. On all three datasets, our best results outperform all previous models based on accuracy and RMSE. Among our four models, CHIM-embedding performs the best in terms of accuracy, with performance increases of 2.4%, 1.3%, and 1.6% on IMDB, Yelp 2013, and Yelp 2014, respectively. CHIM-classifier performs the best in terms of RMSE, outperforming all other models on both Yelp 2013 and 2014 datasets. Among our models, CHIM-attention mechanism performs the worst, which shows similar results to our previous experiment (see Figure FIGREF25). We emphasize that our models use a simple BiLSTM as base model, and extensions to the base model (e.g., using multiple hierarchical LSTMs as in BIBREF21), as well as to other aspects (e.g., consideration of cold-start entities as in BIBREF9), are orthogonal to our proposed attribute representation and injection method. Thus, we expect a further increase in performance when these extensions are done."
],
"extractive_spans": [],
"free_form_answer": "Increase of 2.4%, 1.3%, and 1.6% accuracy on IMDB, Yelp 2013, and Yelp 2014",
"highlighted_evidence": [
"Among our four models, CHIM-embedding performs the best in terms of accuracy, with performance increases of 2.4%, 1.3%, and 1.6% on IMDB, Yelp 2013, and Yelp 2014, respectively. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
]
},
{
"annotation_id": [
"0ef98bd1d311461934e57833ee4e6b4298937df9",
"1b12b52673d17d254844248f4f27a691d3966f06"
],
"answer": [
{
"evidence": [
"The results of our experiments can be summarized in three statements. First, our preliminary experiments show that doing bias-based attribute representation and attention-based injection is not an effective method to incorporate user and product information in sentiment classification models. Second, despite using only a simple BiLSTM with attention classifier, we significantly outperform previous state-of-the-art models that use more complicated architectures (e.g., models that use hierarchical models, external memory networks, etc.). Finally, we show that these attribute representations transfer well to other tasks such as product category classification and review headline generation."
],
"extractive_spans": [
"product category classification and review headline generation"
],
"free_form_answer": "",
"highlighted_evidence": [
"Finally, we show that these attribute representations transfer well to other tasks such as product category classification and review headline generation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We perform experiments on two tasks. The first task is Sentiment Classification, where we are tasked to classify the sentiment of a review text, given additionally the user and product information as attributes. The second task is Attribute Transfer, where we attempt to transfer the attribute encodings learned from the sentiment classification model to solve two other different tasks: (a) Product Category Classification, where we are tasked to classify the category of the product, and (b) Review Headline Generation, where we are tasked to generate the title of the review, given only both the user and product attribute encodings. Datasets, evaluation metrics, and competing models are different for each task and are described in their corresponding sections."
],
"extractive_spans": [
"Product Category Classification",
"Review Headline Generation"
],
"free_form_answer": "",
"highlighted_evidence": [
"The second task is Attribute Transfer, where we attempt to transfer the attribute encodings learned from the sentiment classification model to solve two other different tasks: (a) Product Category Classification, where we are tasked to classify the category of the product, and (b) Review Headline Generation, where we are tasked to generate the title of the review, given only both the user and product attribute encodings."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"39babc31a7797c3c36fdbf3b2e00d3e069b2dfc5",
"a2f9656b1d2e55eece238deb5ad27d5120c2e17a"
],
"answer": [
{
"evidence": [
"To conduct a fair comparison among the different methods described in Section SECREF2, we compare these methods when applied to our base model using the development set of the datasets. Specifically, we use a smaller version of our base model (with dimensions set to 64) and incorporate the user and product attributes using nine different approaches: (1) bias-attention: the bias-based method injected to the attention mechanism, (2-5) the matrix-based method injected to four different locations (matrix-embedding, matrix-encoder, matrix-attention, matrix-classifier), and (6-9) the CHIM-based method injected to four different locations (CHIM-embedding, CHIM-encoder, CHIM-attention, CHIM-classifier). We then calculate the accuracy of each approach for all datasets.",
"Results are shown in Figure FIGREF25. The figure shows that bias-attention consistently performs poorly compared to other approaches. As expected, matrix-based representations perform the worst when injected to embeddings and encoder, however we can already see improvements over bias-attention when these representations are injected to attention and classifier. This is because the number of parameters used in the the weight matrices of attention and classifier are relatively smaller compared to those of embeddings and encoder, thus they are easier to optimize. The CHIM-based representations perform the best among other approaches, where CHIM-embedding garners the highest accuracy across datasets. Finally, even when using a better representation method, CHIM-attention consistently performs the worst among CHIM-based representations. This shows that attention mechanism is not the optimal location to inject attributes.",
"FLOAT SELECTED: Figure 3: Accuracies (y-axis) of different attribute representation (bias, matrix, CHIM) and injection (emb: embed, enc: encode, att: attend, cls: classify) approaches on the development set of the datasets."
],
"extractive_spans": [],
"free_form_answer": "Best accuracy is for proposed CHIM methods (~56% IMDB, ~68.5 YELP datasets), most common bias attention (~53%IMDB, ~65%YELP), and oll others are worse than proposed method.",
"highlighted_evidence": [
"Specifically, we use a smaller version of our base model (with dimensions set to 64) and incorporate the user and product attributes using nine different approaches: (1) bias-attention: the bias-based method injected to the attention mechanism, (2-5) the matrix-based method injected to four different locations (matrix-embedding, matrix-encoder, matrix-attention, matrix-classifier), and (6-9) the CHIM-based method injected to four different locations (CHIM-embedding, CHIM-encoder, CHIM-attention, CHIM-classifier). We then calculate the accuracy of each approach for all datasets.\n\nResults are shown in Figure FIGREF25. The figure shows that bias-attention consistently performs poorly compared to other approaches. As expected, matrix-based representations perform the worst when injected to embeddings and encoder, however we can already see improvements over bias-attention when these representations are injected to attention and classifier.",
"FLOAT SELECTED: Figure 3: Accuracies (y-axis) of different attribute representation (bias, matrix, CHIM) and injection (emb: embed, enc: encode, att: attend, cls: classify) approaches on the development set of the datasets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Sentiment classification results of competing models based on accuracy and RMSE metrics on the three datasets. Underlined values correspond to the best values for each block. Boldfaced values correspond to the best values across the board. 1uses additional external features, 2uses a method that considers cold-start entities, 3uses separate bias-attention for user and product.",
"FLOAT SELECTED: Figure 4: Heatmap of the accuracies of singly and jointly injected CHIM models. Values on each cell represents either the accuracy (for singly injected models) or the difference between the singly and doubly injected models per row.",
"All our experiments unanimously show that (a) the bias-based attribute representation method is not the most optimal method, and (b) injecting attributes in the attention mechanism results to the worst performance among all locations in the model, regardless of the representation method used. The question “where is the best location to inject attributes?” remains unanswered, since different tasks and settings produce different best models. That is, CHIM-embedding achieves the best accuracy while CHIM-classifier achieves the best RMSE on sentiment classification. Moreover, CHIM-encoder produces the most transferable attribute encoding for both product category classification and review headline generation. The suggestion then is to conduct experiments on all locations and check which one is best for the task at hand."
],
"extractive_spans": [],
"free_form_answer": "Sentiment classification (datasets IMDB, Yelp 2013, Yelp 2014): \nembedding 56.4% accuracy, 1.161 RMSE, 67.8% accuracy, 0.646 RMSE, 69.2% accuracy, 0.629 RMSE;\nencoder 55.9% accuracy, 1.234 RMSE, 67.0% accuracy, 0.659 RMSE, 68.4% accuracy, 0.631 RMSE;\nattention 54.4% accuracy, 1.219 RMSE, 66.5% accuracy, 0.664 RMSE, 68.5% accuracy, 0.634 RMSE;\nclassifier 55.5% accuracy, 1.219 RMSE, 67.5% accuracy, 0.641 RMSE, 68.9% accuracy, 0.622 RMSE.\n\nProduct category classification and review headline generation:\nembedding 62.26 ± 0.22% accuracy, 42.71 perplexity;\nencoder 64.62 ± 0.34% accuracy, 42.65 perplexity;\nattention 60.95 ± 0.15% accuracy, 42.78 perplexity;\nclassifier 61.83 ± 0.43% accuracy, 42.69 perplexity.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Sentiment classification results of competing models based on accuracy and RMSE metrics on the three datasets. Underlined values correspond to the best values for each block. Boldfaced values correspond to the best values across the board. 1uses additional external features, 2uses a method that considers cold-start entities, 3uses separate bias-attention for user and product.",
"FLOAT SELECTED: Figure 4: Heatmap of the accuracies of singly and jointly injected CHIM models. Values on each cell represents either the accuracy (for singly injected models) or the difference between the singly and doubly injected models per row.",
"All our experiments unanimously show that (a) the bias-based attribute representation method is not the most optimal method, and (b) injecting attributes in the attention mechanism results to the worst performance among all locations in the model, regardless of the representation method used. The question “where is the best location to inject attributes?” remains unanswered, since different tasks and settings produce different best models. That is, CHIM-embedding achieves the best accuracy while CHIM-classifier achieves the best RMSE on sentiment classification. Moreover, CHIM-encoder produces the most transferable attribute encoding for both product category classification and review headline generation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How significant are the improvements over previous approaches?",
"Which other tasks are evaluated?",
"What are the performances associated to different attribute placing?"
],
"question_id": [
"620b6c410a055295d137511d3c99207a47c03b5e",
"e459760879f662b2205cbdc0f5396dbfe41323ae",
"1c3a20dceec2a86fb61e70fab97a9fb549b5c54c"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"sentiment ",
"sentiment ",
"sentiment "
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Illustrative examples of issues when representing attributes as biases and injecting them in the attention mechanism. The gray process icon indicates the model without incorporating attributes, while the same icon in green indicates the model customized for the green user.",
"Figure 2: CHIM-based attribute representation and injection to a non-linear funtion in the model.",
"Table 1: Statistics of the datasets used for the Sentiment Classification task.",
"Figure 3: Accuracies (y-axis) of different attribute representation (bias, matrix, CHIM) and injection (emb: embed, enc: encode, att: attend, cls: classify) approaches on the development set of the datasets.",
"Table 2: Sentiment classification results of competing models based on accuracy and RMSE metrics on the three datasets. Underlined values correspond to the best values for each block. Boldfaced values correspond to the best values across the board. 1uses additional external features, 2uses a method that considers cold-start entities, 3uses separate bias-attention for user and product.",
"Table 3: Accuracy (higher is better) and perplexity (lower is better) of competing models on the Amazon dataset for the transfer tasks on product category classification and review headline generation, respectively. Accuracy intervals are calculated by running the model 10 times. Performance worse than the random and majority baselines are colored red.",
"Figure 4: Heatmap of the accuracies of singly and jointly injected CHIM models. Values on each cell represents either the accuracy (for singly injected models) or the difference between the singly and doubly injected models per row."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Table1-1.png",
"6-Figure3-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"9-Figure4-1.png"
]
} | [
"How significant are the improvements over previous approaches?",
"What are the performances associated to different attribute placing?"
] | [
[
"1908.09590-Experiments ::: Sentiment Classification ::: Comparisons with models in the literature-8"
],
[
"1908.09590-7-Table2-1.png",
"1908.09590-6-Figure3-1.png",
"1908.09590-9-Figure4-1.png",
"1908.09590-Experiments ::: Sentiment Classification ::: Comparisons of different attribute representation and injection methods-0",
"1908.09590-Experiments ::: Sentiment Classification ::: Comparisons of different attribute representation and injection methods-1",
"1908.09590-Experiments ::: Where should attributes be injected?-0"
]
] | [
"Increase of 2.4%, 1.3%, and 1.6% accuracy on IMDB, Yelp 2013, and Yelp 2014",
"Sentiment classification (datasets IMDB, Yelp 2013, Yelp 2014): \nembedding 56.4% accuracy, 1.161 RMSE, 67.8% accuracy, 0.646 RMSE, 69.2% accuracy, 0.629 RMSE;\nencoder 55.9% accuracy, 1.234 RMSE, 67.0% accuracy, 0.659 RMSE, 68.4% accuracy, 0.631 RMSE;\nattention 54.4% accuracy, 1.219 RMSE, 66.5% accuracy, 0.664 RMSE, 68.5% accuracy, 0.634 RMSE;\nclassifier 55.5% accuracy, 1.219 RMSE, 67.5% accuracy, 0.641 RMSE, 68.9% accuracy, 0.622 RMSE.\n\nProduct category classification and review headline generation:\nembedding 62.26 ± 0.22% accuracy, 42.71 perplexity;\nencoder 64.62 ± 0.34% accuracy, 42.65 perplexity;\nattention 60.95 ± 0.15% accuracy, 42.78 perplexity;\nclassifier 61.83 ± 0.43% accuracy, 42.69 perplexity."
] | 308 |
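The 1908.09590 record above centres on chunk-wise importance matrices (CHIM): the concatenated user and product embeddings are projected to a small chunk matrix, tiled up to the shape of an existing weight matrix, turned into sigmoid gates, and multiplied element-wise with that matrix. The minimal NumPy sketch below illustrates only that idea. The dimensions follow the setup quoted in the record (300-dimensional embeddings, chunk size factors of 15), but the random initialisation, the projection W_c, and the variable names are assumptions made for illustration, not the authors' released implementation.

import numpy as np

rng = np.random.default_rng(0)

D1, D2 = 300, 300          # shape of the weight matrix W being gated
C1, C2 = 15, 15            # chunk size factors
d_attr = 300               # user / product embedding dimensionality

W = rng.standard_normal((D1, D2)) * 0.01                                 # an existing model weight matrix
u = rng.standard_normal(d_attr)                                          # user embedding (placeholder)
p = rng.standard_normal(d_attr)                                          # product embedding (placeholder)
W_c = rng.standard_normal((2 * d_attr, (D1 // C1) * (D2 // C2))) * 0.01  # attribute-to-chunk projection

def chim_gate(W, u, p, W_c):
    """Scale W element-wise by chunk-wise importance gates derived from u and p."""
    c = np.concatenate([u, p]) @ W_c      # linear projection of the attributes
    C = c.reshape(D1 // C1, D2 // C2)     # reshape into the small chunk matrix
    tiled = np.tile(C, (C1, C2))          # repeat chunks up to W's shape
    gates = 1.0 / (1.0 + np.exp(-tiled))  # sigmoid -> importance gates in (0, 1)
    return gates * W                      # shrink weights deemed unimportant

W_injected = chim_gate(W, u, p, W_c)
print(W_injected.shape)                   # -> (300, 300)

In the record's description, the gated matrix then stands in for the original weight matrix inside one of the model's non-linear transforms (embedding, encoder, attention, or classifier).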
1806.09652 | Neural Machine Translation for Low Resource Languages using Bilingual Lexicon Induced from Comparable Corpora | Resources for the non-English languages are scarce and this paper addresses this problem in the context of machine translation, by automatically extracting parallel sentence pairs from the multilingual articles available on the Internet. In this paper, we have used an end-to-end Siamese bidirectional recurrent neural network to generate parallel sentences from comparable multilingual articles in Wikipedia. Subsequently, we have showed that using the harvested dataset improved BLEU scores on both NMT and phrase-based SMT systems for the low-resource language pairs: English--Hindi and English--Tamil, when compared to training exclusively on the limited bilingual corpora collected for these language pairs. | {
"paragraphs": [
[
"Both neural and statistical machine translation approaches are highly reliant on the availability of large amounts of data and are known to perform poorly in low resource settings. Recent crowd-sourcing efforts and workshops on machine translation have resulted in small amounts of parallel texts for building viable machine translation systems for low-resource pairs BIBREF0 . But, they have been shown to suffer from low accuracy (incorrect translation) and low coverage (high out-of-vocabulary rates), due to insufficient training data. In this project, we try to address the high OOV rates in low-resource machine translation systems by leveraging the increasing amount of multilingual content available on the Internet for enriching the bilingual lexicon.",
"Comparable corpora such as Wikipedia, are collections of topic-aligned but non-sentence-aligned multilingual documents which are rich resources for extracting parallel sentences from. For example, Figure FIGREF1 shows that there are equivalent sentences on the page about Donald Trump in Tamil and English, and the phrase alignment for an example sentence is shown in Table TABREF4 .",
"Table TABREF2 shows that there are at least tens of thousands of bilingual articles on Wikipedia which could potentially have at least as many parallel sentences that could be mined to address the scarcity of parallel sentences as indicated in column 2 which shows the number of sentence-pairs in the largest available bilingual corpora for xx-en. As shown by BIBREF1 ( BIBREF1 ), the illustrated data sparsity can be addressed by extending the scarce parallel sentence-pairs with those automatically extracted from Wikipedia and thereby improving the performance of statistical machine translation systems.",
"In this paper, we will propose a neural approach to parallel sentence extraction and compare the BLEU scores of machine translation systems with and without the use of the extracted sentence pairs to justify the effectiveness of this method. Compared to previous approaches which require specialized meta-data from document structure or significant amount of hand-engineered features, the neural model for extracting parallel sentences is learned end-to-end using only a small bootstrap set of parallel sentence pairs."
],
[
"A lot of work has been done on the problem of automatic sentence alignment from comparable corpora, but a majority of them BIBREF2 , BIBREF1 , BIBREF3 use a pre-existing translation system as a precursor to ranking the candidate sentence pairs, which the low resource language pairs are not at the luxury of having; or use statistical machine learning approaches, where a Maximum Entropy classifier is used that relies on surface level features such as word overlap in order to obtain parallel sentence pairs BIBREF4 . However, the deep neural network model used in our paper is probably the first of its kind, which does not need any feature engineering and also does not need a pre-existing translation system.",
" BIBREF4 ( BIBREF4 ) proposed a parallel sentence extraction system which used comparable corpora from newspaper articles to extract the parallel sentence pairs. In this procedure, a maximum entropy classifier is designed for all sentence pairs possible from the Cartesian product of a pair of documents and passed through a sentence-length ratio filter in order to obtain candidate sentence pairs. SMT systems were trained on the extracted sentence pairs using the additional features from the comparable corpora like distortion and position of current and previously aligned sentences. This resulted in a state of the art approach with respect to the translation performance of low resource languages.",
"Similar to our proposed approach, BIBREF5 ( BIBREF5 ) showed how using parallel documents from Wikipedia for domain specific alignment would improve translation quality of SMT systems on in-domain data. In this method, similarity between all pairs of cross-language sentences with different text similarity measures are estimated. The issue of domain definition is overcome by the use of IR techniques which use the characteristic vocabulary of the domain to query a Lucene search engine over the entire corpus. The candidate sentences are defined based on word overlap and the decision whether a sentence pair is parallel or not using the maximum entropy classifier. The difference in the BLEU scores between out of domain and domain-specific translation is proved clearly using the word embeddings from characteristic vocabulary extracted using the extracted additional bitexts.",
" BIBREF2 ( BIBREF2 ) extract parallel sentences without the use of a classifier. Target language candidate sentences are found using the translation of source side comparable corpora. Sentence tail removal is used to strip the tail parts of sentence pairs which differ only at the end. This, along with the use of parallel sentences enhanced the BLEU score and helped to determine if the translated source sentence and candidate target sentence are parallel by measuring the word and translation error rate. This method succeeds in eliminating the need for domain specific text by using the target side as a source of candidate sentences. However, this approach is not feasible if there isn't a good source side translation system to begin with, like in our case.",
"Yet another approach which uses an existing translation system to extract parallel sentences from comparable documents was proposed by BIBREF3 ( BIBREF3 ). They describe a framework for machine translation using multilingual Wikipedia articles. The parallel corpus is assembled iteratively, by using a statistical machine translation system trained on a preliminary sentence-aligned corpus, to score sentence-level en–jp BLEU scores. After filtering out the unaligned pairs based on the MT evaluation metric, the SMT is retrained on the filtered pairs."
],
[
"In this section, we will describe the entire pipeline, depicted in Figure FIGREF5 , which is involved in training a parallel sentence extraction system, and also to infer and decode high-precision nearly-parallel sentence-pairs from bilingual article pages collected from Wikipedia."
],
[
"The parallel sentence extraction system needs a sentence aligned corpus which has been curated. These sentences were used as the ground truth pairs when we trained the model to classify parallel sentence pair from non-parallel pairs."
],
[
"The binary classifier described in the next section, assigns a translation probability score to a given sentence pair, after learning from examples of translations and negative examples of non-translation pairs. For, this we make a simplistic assumption that the parallel sentence pairs found in the bootstrap dataset are unique combinations, which fail being translations of each other, when we randomly pick a sentence from both the sets. Thus, there might be cases of false negatives due to the reliance on unsupervised random sampling for generation of negative labels.",
"Therefore at the beginning of every epoch, we randomly sample INLINEFORM0 negative sentences of the target language for every source sentence. From a few experiments and also from the literature, we converged on INLINEFORM1 to be performing the best, given our compute constraints."
],
[
"Here, we describe the neural network architecture as shown in BIBREF6 ( BIBREF6 ), where the network learns to estimate the probability that the sentences in a given sentence pair, are translations of each other, INLINEFORM0 , where INLINEFORM1 is the candidate source sentence in the given pair, and INLINEFORM2 is the candidate target sentence.",
"As illustrated in Figure FIGREF5 (d), the architecture uses a siamese network BIBREF7 , consisting of a bidirectional RNN BIBREF8 sentence encoder with recurrent units such as long short-term memory units, or LSTMs BIBREF9 and gated recurrent units, or GRUs BIBREF10 learning a vector representation for the source and target sentences and the probability of any given pair of sentences being translations of each other. For seq2seq architectures, especially in translation, we have found the that the recommended recurrent unit is GRU, and all our experiments use this over LSTM.",
"The forward RNN reads the variable-length sentence and updates its recurrent state from the first token until the last one to create a fixed-size continuous vector representation of the sentence. The backward RNN processes the sentence in reverse. In our experiments, we use the concatenation of the last recurrent state in both directions as a final representation INLINEFORM0 DISPLAYFORM0 ",
"where INLINEFORM0 is the gated recurrent unit (GRU). After both source and target sentences have been encoded, we capture their matching information by using their element-wise product and absolute element-wise difference. We estimate the probability that the sentences are translations of each other by feeding the matching vectors into fully connected layers: DISPLAYFORM0 ",
"where INLINEFORM0 is the sigmoid function, INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 and INLINEFORM5 are model parameters. The model is trained by minimizing the cross entropy of our labeled sentence pairs: DISPLAYFORM0 ",
"where INLINEFORM0 is the number of source sentences and INLINEFORM1 is the number of candidate target sentences being considered.",
"For prediction, a sentence pair is classified as parallel if the probability score is greater than or equal to a decision threshold INLINEFORM0 that we need to fix. We found that to get high precision sentence pairs, we had to use INLINEFORM1 , and if we were able to sacrifice some precision for recall, a lower INLINEFORM2 of 0.80 would work in the favor of reducing OOV rates. DISPLAYFORM0 "
],
[
"We experimented with two language pairs: English – Hindi (en–hi) and English – Tamil (en–ta). The parallel sentence extraction systems for both en–ta and en–hi were trained using the architecture described in SECREF7 on the following bootstrap set of parallel corpora:",
"An English-Tamil parallel corpus BIBREF11 containing a total of INLINEFORM0 sentence pairs, composed of INLINEFORM1 English Tokens and INLINEFORM2 Tamil Tokens.",
"An English-Hindi parallel corpus BIBREF12 containing a total of INLINEFORM0 sentence pairs, from which a set of INLINEFORM1 sentence pairs were picked randomly.",
"Subsequently, we extracted parallel sentences using the trained model, and parallel articles collected from Wikipedia. There were INLINEFORM0 bilingual English-Tamil and INLINEFORM1 English-Hindi titles on the Wikimedia dumps collected in December 2017.",
"",
""
],
[
"For the evaluation of the performance of our sentence extraction models, we looked at a few sentences manually, and have done a qualitative analysis, as there was no gold standard evaluation set for sentences extracted from Wikipedia. In Table TABREF13 , we can see the qualitative accuracy for some parallel sentences extracted from Tamil. The sentences extracted from Tamil, have been translated to English using Google Translate, so as to facilitate a comparison with the sentences extracted from English.",
"For the statistical machine translation and neural machine translation evaluation we use the BLEU score BIBREF13 as an evaluation metric, computed using the multi-bleu script from Moses BIBREF14 ."
],
[
"Figures FIGREF16 shows the number of high precision sentences that were extracted at INLINEFORM0 without greedy decoding. Greedy decoding could be thought of as sampling without replacement, where a sentence that's already been extracted on one side of the extraction system, is precluded from being considered again. Hence, the number of sentences without greedy decoding, are of an order of magnitude higher than with decoding, as can be seen in Figure FIGREF16 ."
],
[
"We evaluated the quality of the extracted parallel sentence pairs, by performing machine translation experiments on the augmented parallel corpus.",
"As the dataset for training the machine translation systems, we used high precision sentences extracted with greedy decoding, by ranking the sentence-pairs on their translation probabilities. Phrase-Based SMT systems were trained using Moses BIBREF14 . We used the grow-diag-final-and heuristic for extracting phrases, lexicalised reordering and Batch MIRA BIBREF15 for tuning (the default parameters on Moses). We trained 5-gram language models with Kneser-Ney smoothing using KenLM BIBREF16 . With these parameters, we trained SMT systems for en–ta and en–hi language pairs, with and without the use of extracted parallel sentence pairs.",
"For training neural machine translation models, we used the TensorFlow BIBREF17 implementation of OpenNMT BIBREF18 with attention-based transformer architecture BIBREF19 . The BLEU scores for the NMT models were higher than for SMT models, for both en–ta and en–hi pairs, as can be seen in Table TABREF23 ."
],
[
"In this paper, we evaluated the benefits of using a neural network procedure to extract parallel sentences. Unlike traditional translation systems which make use of multi-step classification procedures, this method requires just a parallel corpus to extract parallel sentence pairs using a Siamese BiRNN encoder using GRU as the activation function.",
"This method is extremely beneficial for translating language pairs with very little parallel corpora. These parallel sentences facilitate significant improvement in machine translation quality when compared to a generic system as has been shown in our results.",
"The experiments are shown for English-Tamil and English-Hindi language pairs. Our model achieved a marked percentage increase in the BLEU score for both en–ta and en–hi language pairs. We demonstrated a percentage increase in BLEU scores of 11.03% and 14.7% for en–ta and en–hi pairs respectively, due to the use of parallel-sentence pairs extracted from comparable corpora using the neural architecture.",
"As a follow-up to this work, we would be comparing our framework against other sentence alignment methods described in BIBREF20 , BIBREF21 , BIBREF22 and BIBREF23 . It has also been interesting to note that the 2018 edition of the Workshop on Machine Translation (WMT) has released a new shared task called Parallel Corpus Filtering where participants develop methods to filter a given noisy parallel corpus (crawled from the web), to a smaller size of high quality sentence pairs. This would be the perfect avenue to test the efficacy of our neural network based approach of extracting parallel sentences from unaligned corpora."
]
],
"section_name": [
"Introduction",
"Related Work",
"Approach",
"Bootstrap Dataset",
"Negative Sampling",
"Model",
"Dataset",
"Evaluation Metrics",
"Sentence Alignment",
"Machine Translation",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"3add51fc1b2360bd58f492bcb986f22ea223ab05",
"a1c5daaf298c140fb8e700f2b57c7465152ad350"
],
"answer": [
{
"evidence": [
"For the evaluation of the performance of our sentence extraction models, we looked at a few sentences manually, and have done a qualitative analysis, as there was no gold standard evaluation set for sentences extracted from Wikipedia. In Table TABREF13 , we can see the qualitative accuracy for some parallel sentences extracted from Tamil. The sentences extracted from Tamil, have been translated to English using Google Translate, so as to facilitate a comparison with the sentences extracted from English.",
"We evaluated the quality of the extracted parallel sentence pairs, by performing machine translation experiments on the augmented parallel corpus."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" In Table TABREF13 , we can see the qualitative accuracy for some parallel sentences extracted from Tamil.",
"We evaluated the quality of the extracted parallel sentence pairs, by performing machine translation experiments on the augmented parallel corpus."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"For the evaluation of the performance of our sentence extraction models, we looked at a few sentences manually, and have done a qualitative analysis, as there was no gold standard evaluation set for sentences extracted from Wikipedia. In Table TABREF13 , we can see the qualitative accuracy for some parallel sentences extracted from Tamil. The sentences extracted from Tamil, have been translated to English using Google Translate, so as to facilitate a comparison with the sentences extracted from English."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"For the evaluation of the performance of our sentence extraction models, we looked at a few sentences manually, and have done a qualitative analysis, as there was no gold standard evaluation set for sentences extracted from Wikipedia. In Table TABREF13 , we can see the qualitative accuracy for some parallel sentences extracted from Tamil."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"0f3f718fe341e82ae1e7566230cd350368cb4a76",
"edad8bea1964cba8c821a6ca03e0b13b61f4405c"
],
"answer": [
{
"evidence": [
"Subsequently, we extracted parallel sentences using the trained model, and parallel articles collected from Wikipedia. There were INLINEFORM0 bilingual English-Tamil and INLINEFORM1 English-Hindi titles on the Wikimedia dumps collected in December 2017."
],
"extractive_spans": [
"INLINEFORM0 bilingual English-Tamil and INLINEFORM1 English-Hindi titles on the Wikimedia"
],
"free_form_answer": "",
"highlighted_evidence": [
"Subsequently, we extracted parallel sentences using the trained model, and parallel articles collected from Wikipedia. There were INLINEFORM0 bilingual English-Tamil and INLINEFORM1 English-Hindi titles on the Wikimedia dumps collected in December 2017."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Subsequently, we extracted parallel sentences using the trained model, and parallel articles collected from Wikipedia. There were INLINEFORM0 bilingual English-Tamil and INLINEFORM1 English-Hindi titles on the Wikimedia dumps collected in December 2017."
],
"extractive_spans": [
"INLINEFORM0 bilingual English-Tamil",
"INLINEFORM1 English-Hindi titles"
],
"free_form_answer": "",
"highlighted_evidence": [
"There were INLINEFORM0 bilingual English-Tamil and INLINEFORM1 English-Hindi titles on the Wikimedia dumps collected in December 2017."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7e145a7e608c2c69f3282d0f229eab12bb11e69e",
"cfa621c712edef20f3c30b08a21bc0499d5da915"
],
"answer": [
{
"evidence": [
"As the dataset for training the machine translation systems, we used high precision sentences extracted with greedy decoding, by ranking the sentence-pairs on their translation probabilities. Phrase-Based SMT systems were trained using Moses BIBREF14 . We used the grow-diag-final-and heuristic for extracting phrases, lexicalised reordering and Batch MIRA BIBREF15 for tuning (the default parameters on Moses). We trained 5-gram language models with Kneser-Ney smoothing using KenLM BIBREF16 . With these parameters, we trained SMT systems for en–ta and en–hi language pairs, with and without the use of extracted parallel sentence pairs."
],
"extractive_spans": [],
"free_form_answer": "Phrase-Based SMT systems were trained using Moses, grow-diag-final-and heuristic were used for extracting phrases, and lexicalised reordering and Batch MIRA for tuning.",
"highlighted_evidence": [
"Phrase-Based SMT systems were trained using Moses BIBREF14 . We used the grow-diag-final-and heuristic for extracting phrases, lexicalised reordering and Batch MIRA BIBREF15 for tuning (the default parameters on Moses). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As the dataset for training the machine translation systems, we used high precision sentences extracted with greedy decoding, by ranking the sentence-pairs on their translation probabilities. Phrase-Based SMT systems were trained using Moses BIBREF14 . We used the grow-diag-final-and heuristic for extracting phrases, lexicalised reordering and Batch MIRA BIBREF15 for tuning (the default parameters on Moses). We trained 5-gram language models with Kneser-Ney smoothing using KenLM BIBREF16 . With these parameters, we trained SMT systems for en–ta and en–hi language pairs, with and without the use of extracted parallel sentence pairs."
],
"extractive_spans": [
"Moses BIBREF14"
],
"free_form_answer": "",
"highlighted_evidence": [
"Phrase-Based SMT systems were trained using Moses BIBREF14 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"529a59329b29888475c94ebec42a51c717c72ca0",
"d4f7c1278215db411dc5912e84397b74a2871ee5"
],
"answer": [
{
"evidence": [
"For training neural machine translation models, we used the TensorFlow BIBREF17 implementation of OpenNMT BIBREF18 with attention-based transformer architecture BIBREF19 . The BLEU scores for the NMT models were higher than for SMT models, for both en–ta and en–hi pairs, as can be seen in Table TABREF23 ."
],
"extractive_spans": [
" TensorFlow BIBREF17 implementation of OpenNMT"
],
"free_form_answer": "",
"highlighted_evidence": [
"For training neural machine translation models, we used the TensorFlow BIBREF17 implementation of OpenNMT BIBREF18 with attention-based transformer architecture BIBREF19 . "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For training neural machine translation models, we used the TensorFlow BIBREF17 implementation of OpenNMT BIBREF18 with attention-based transformer architecture BIBREF19 . The BLEU scores for the NMT models were higher than for SMT models, for both en–ta and en–hi pairs, as can be seen in Table TABREF23 ."
],
"extractive_spans": [
"OpenNMT BIBREF18 with attention-based transformer architecture BIBREF19"
],
"free_form_answer": "",
"highlighted_evidence": [
"For training neural machine translation models, we used the TensorFlow BIBREF17 implementation of OpenNMT BIBREF18 with attention-based transformer architecture BIBREF19 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"458539dafcda474add2d91c6bac0ad59f2e9a752",
"b086a9bbb24493d523af9c1497005b12b2d73d8a"
],
"answer": [
{
"evidence": [
"The experiments are shown for English-Tamil and English-Hindi language pairs. Our model achieved a marked percentage increase in the BLEU score for both en–ta and en–hi language pairs. We demonstrated a percentage increase in BLEU scores of 11.03% and 14.7% for en–ta and en–hi pairs respectively, due to the use of parallel-sentence pairs extracted from comparable corpora using the neural architecture."
],
"extractive_spans": [
" 11.03% and 14.7% for en–ta and en–hi pairs respectively"
],
"free_form_answer": "",
"highlighted_evidence": [
" We demonstrated a percentage increase in BLEU scores of 11.03% and 14.7% for en–ta and en–hi pairs respectively, due to the use of parallel-sentence pairs extracted from comparable corpora using the neural architecture."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The experiments are shown for English-Tamil and English-Hindi language pairs. Our model achieved a marked percentage increase in the BLEU score for both en–ta and en–hi language pairs. We demonstrated a percentage increase in BLEU scores of 11.03% and 14.7% for en–ta and en–hi pairs respectively, due to the use of parallel-sentence pairs extracted from comparable corpora using the neural architecture."
],
"extractive_spans": [
"11.03% and 14.7% for en–ta and en–hi pairs respectively"
],
"free_form_answer": "",
"highlighted_evidence": [
"We demonstrated a percentage increase in BLEU scores of 11.03% and 14.7% for en–ta and en–hi pairs respectively, due to the use of parallel-sentence pairs extracted from comparable corpora using the neural architecture."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do they evaluate their parallel sentence generation?",
"How much data do they manage to gather online?",
"Which models do they use for phrase-based SMT?",
"Which models do they use for NMT?",
"What are the BLEU performance improvements they achieve?"
],
"question_id": [
"1f053f338df6d238cb163af1a0b1b073e749ed8a",
"fb06ed5cf9f04ff2039298af33384ca71ddbb461",
"754d7475b8bf50499ed77328b4b0eeedf9cb2623",
"1d10e069b4304fabfbed69acf409f0a311bdc441",
"718c0232b1f15ddb73d40c3afbd6c5c0d0354566"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1. No. of articles on Wikipedia against the available parallel sentences in X-En Corpus.",
"Figure 1. The EN-TA parallel pairs aligned word by word. This corresponds to the Figure 2.",
"Figure 2. A comparison of parallel sentences found in multilingual Wikipedia articles about Donald Trump in English and Tamil.",
"Figure 3. Architecture for the parallel sentence extraction system including training and inference pipelines. Some notations: en - English, ta - Tamil",
"Figure 4. No of parallel sentences extracted from 10,000 parallel Wikipedia article pairs at different thresholds without greedy decoding",
"Figure 5. No of parallel sentences extracted from 10,000 parallel Wikipedia article pairs at different thresholds with greedy decoding.",
"Table 3. BLEU score results for En-Ta"
],
"file": [
"1-Table1-1.png",
"1-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"5-Figure4-1.png",
"5-Figure5-1.png",
"5-Table3-1.png"
]
} | [
"Which models do they use for phrase-based SMT?"
] | [
[
"1806.09652-Machine Translation-1"
]
] | [
"Phrase-Based SMT systems were trained using Moses, grow-diag-final-and heuristic were used for extracting phrases, and lexicalised reordering and Batch MIRA for tuning."
] | 310 |
2002.00175 | UIT-ViIC: A Dataset for the First Evaluation on Vietnamese Image Captioning | Image Captioning, the task of automatic generation of image captions, has attracted attentions from researchers in many fields of computer science, being computer vision, natural language processing and machine learning in recent years. This paper contributes to research on Image Captioning task in terms of extending dataset to a different language - Vietnamese. So far, there is no existed Image Captioning dataset for Vietnamese language, so this is the foremost fundamental step for developing Vietnamese Image Captioning. In this scope, we first build a dataset which contains manually written captions for images from Microsoft COCO dataset relating to sports played with balls, we called this dataset UIT-ViIC. UIT-ViIC consists of 19,250 Vietnamese captions for 3,850 images. Following that, we evaluate our dataset on deep neural network models and do comparisons with English dataset and two Vietnamese datasets built by different methods. UIT-ViIC is published on our lab website for research purposes. | {
"paragraphs": [
[
"Generating descriptions for multimedia contents such as images and videos, so called Image Captioning, is helpful for e-commerce companies or news agencies. For instance, in e-commerce field, people will no longer need to put much effort into understanding and describing products' images on their websites because image contents can be recognized and descriptions are automatically generated. Inspired by Horus BIBREF0 , Image Captioning system can also be integrated into a wearable device, which is able to capture surrounding images and generate descriptions as sound in real time to guide people with visually impaired.",
"Image Captioning has attracted attentions from researchers in recent years BIBREF1, BIBREF2, BIBREF3, and there has been promising attempts dealing with language barrier in this task by extending existed dataset captions into different languages BIBREF3, BIBREF4.",
"In this study, generating image captions in Vietnamese language is put into consideration. One straightforward approach for this task is to translate English captions into Vietnamese by human or by using machine translation tool, Google translation. With the method of translating directly from English to Vietnamese, we found that the descriptions are sometimes confusing and unnatural to native people. Moreover, image understandings are cultural dependent, as in Western, people usually have different ways to grasp images and different vocabulary choices for describing contexts. For instance, in Fig. FIGREF2, one MS-COCO English caption introduce about \"a baseball player in motion of pitching\", which makes sense and capture accurately the main activity in the image. Though it sounds sensible in English, the sentence becomes less meaningful when we try to translate it into Vietnamese. One attempt of translating the sentence is performed by Google Translation, and the result is not as expected.",
"",
"",
"Therefore, we come up with the approach of constructing a Vietnamese Image Captioning dataset with descriptions written manually by human. Composed by Vietnamese people, the sentences would be more natural and friendlier to Vietnamese users. The main resources we used from MS-COCO for our dataset are images. Besides, we consider having our dataset focus on sportball category due to several reasons:",
"By concentrating on a specific domain we are more likely to improve performance of the Image Captioning models. We expect our dataset can be used to confirm or reject this hypothesis.",
"Sportball Image Captioning can be used in certain sport applications, such as supportting journalists describing great amount of images for their articles.",
"Our primary contributions of this paper are as follows:",
"Firstly, we introduce UIT-ViIC, the first Vietnamese dataset extending MS-COCO with manually written captions for Image Captioning. UIT-ViIC is published for research purposes.",
"Secondly, we introduce our annotation tool for dataset construction, which is also published to help annotators conveniently create captions.",
"Finally, we conduct experiments to evaluate state-of-the-art models (evaluated on English dataset) on UIT-ViIC dataset, then we analyze the performance results to have insights into our corpus.",
"The structure of the paper is organized as follows. Related documents and studies are presented in Section SECREF2. UIT-ViIC dataset creation is described in Section SECREF3. Section SECREF4 describes the methods we implement. The experimental results and analysis are presented in Section SECREF5. Conclusion and future work are deduced in Section SECREF6."
],
[
"",
"We summarize in Table TABREF8 an incomplete list of published Image Captioning datasets, in English and in other languages. Several image caption datasets for English have been constructed, the representative examples are Flickr3k BIBREF5, BIBREF6; Flickr 30k BIBREF7 – an extending of Flickr3k and Microsoft COCO (Microsoft Common in Objects in Context) BIBREF8.",
"Besides, several image datasets with non-English captions have been developed. Depending on their applications, the target languages of these datasets vary, including German and French for image retrieval, Japanese for cross-lingual document retrieval BIBREF9 and image captioning BIBREF10, BIBREF3, Chinese for image tagging, captioning and retrieval BIBREF4. Each of these datasets is built on top of an existing English dataset, with MS-COCO as the most popular choice.",
"Our dataset UIT-ViIC is constructed using images from Microsoft COCO (MS-COCO). MS-COCO dataset includes more than 150,000 images, divided into three distributions: train, vailidate, test. For each image, five captions are provided independently by Amazon’s Mechanical Turk. MS-COCO is the most popular dataset for Image Captioning thanks to the MS-COCO challenge (2015) and it has a powerful evaluation server for candidates.",
"Regarding to the Vietnamese language processing, there are quite a number of research works on other tasks such as parsing, part-of-speech, named entity recognition, sentiment analysis, question answering. However, to the extent of our knowledge, there are no research publications on image captioning for Vietnamese. Therefore, we decide to build a new corpus of Vietnamese image captioning for Image Captioning research community and evaluate the state-of-the-art models on our corpus. In particular, we validate and compare the results by BLEU BIBREF11, ROUGE BIBREF12 and CIDEr BIBREF13 metrics between Neural Image Captioning (NIC) model BIBREF14, Image Captioning model from the Pytorch-tutorial BIBREF15 by Yunjey on our corpus as the pioneering results."
],
[
"This section demonstrates how we constructed our new Vietnamese dataset. The dataset consists of 3,850 images relating to sports played with balls from 2017 edition of Microsoft COCO. Similar to most Image Captioning datasets, we provide five Vietnamese captions for each image, summing up to 19,250 captions in total."
],
[
"To enhance annotation efficiency, we present a web-based application for caption annotation. Fig. FIGREF10 is the annotation screen of the application.",
"Our tool assists annotators conveniently load images into a display and store captions they created into a new dataset. With saving function, annotator can save and load written captions for reviewing purposes. Furthermore, users are able to look back their works or the others’ by searching image by image ids.",
"The tool also supports content suggestions taking advantage of existing information from MS-COCO. First, there are categories hints for each image, displaying as friendly icon. Second, original English captions are displayed if annotator feels their needs. Those content suggestions are helpful for annotators who can’t clearly understand images, especially when there are issues with images’ quality."
],
[
"In this section, we describes procedures of building our sportball Vietnamese dataset, called UIT-ViIC.",
"Our human resources for dataset construction involve five writers, whose ages are from 22-25. Being native Vietnamese residents, they are fluent in Vietnamese. All five UIT-ViIC creators first research and are trained about sports knowledge as well as the specialized vocabulary before starting to work.",
"During annotation process, there are inconsistencies and disagreements between human's understandings and the way they see images. According to Micah Hodosh et al BIBREF5, most images’ captions on Internet nowadays tend to introduce information that cannot be obtained from the image itself, such as people name, location name, time, etc. Therefore, to successfully compose meaningful descriptive captions we expect, their should be strict guidelines.",
"Inspired from MS-COCO annotation rules BIBREF16, we first sketched UIT-ViIC's guidelines for our captions:",
"Each caption must contain at least ten Vietnamese words.",
"Only describe visible activities and objects included in image.",
"Exclude name of places, streets (Chinatown, New York, etc.) and number (apartment numbers, specific time on TV, etc.)",
"Familiar English words such as laptop, TV, tennis, etc. are allowed.",
"Each caption must be a single sentence with continuous tense.",
"Personal opinion and emotion must be excluded while annotating.",
"Annotators can describe the activities and objects from different perspectives.",
"Visible “thing” objects are the only one to be described.",
"Ambiguous “stuff” objects which do not have obvious “border” are ignored.",
"In case of 10 to 15 objects which are in the same category or species, annotators do not need to include them in the caption.",
"In comparison with MS-COCO BIBREF16 data collection guidelines in terms of annotation, UIT-ViIC’s guidelines has similar rules (1, 2, 8, 9, 10) . We extend from MS-COCO’s guidelines with five new rules to our own and have modifications in the original ones.",
"In both datasets, we would like to control sentence length and focus on describing important subjects only in order to make sure that essential information is mainly included in captions. The MS-COCO threshold for sentence’s length is 8, and we raise the number to 10 for our dataset. One reason for this change is that an object in image is usually expressed in many Vietnamese words. For example, a “baseball player” in English can be translated into “vận động viên bóng chày” or “cầu thủ bóng chày”, which already accounted for a significant length of the Vietnamese sentence. In addition, captions must be single sentences with continuous tense as we expect our model’s output to capture what we are seeing in the image in a consise way.",
"On the other hand, proper name for places, streets, etc must not be mentioned in this dataset in order to avoid confusions and incorrect identification names with the same scenery for output. Besides, annotators’ personal opinion must be excluded for more meaningful captions. Vietnamese words for several English ones such as tennis, pizza, TV, etc are not existed, so annotators could use such familiar words in describing captions. For some images, the subjects are ambiguous and not descriptive which would be difficult for annotators to describe in words. That’s the reason why annotators can describe images from more than one perspective."
],
[
"After finishing constructing UIT-ViIC dataset, we have a look in statistical analysis on our corpus in this section. UIT-ViIC covers 3,850 images described by 19,250 Vietnamese captions. Sticking strictly to our annotation guidelines, the majority of our captions are at the length of 10-15 tokens. We are using the term “tokens” here as a Vietnamese word can consist of one, two or even three tokens. Therefore, to apply Vietnamese properly to Image Captioning, we present a tokenization tool - PyVI BIBREF17, which is specialized for Vietnamese language tokenization, at words level. The sentence length using token-level tokenizer and word-level tokenizer are compared and illustrated in Fig. FIGREF23, we can see there are variances there. So that, we can suggest that the tokenizer performs well enough, and we can expect our Image Captioning models to perform better with Vietnamese sentences that are tokenized, as most models perform more efficiently with captions having fewer words.",
"Table TABREF24 summarizes top three most occuring words for each part-of-speech. Our dataset vocabulary size is 1,472 word classes, including 723 nouns, 567 verbs, and 182 adjectives. It is no surprise that as our dataset is about sports with balls, the noun “bóng” (meaning “ball\") occurs most, followed by “sân” and \"cầu thủ\" (“pitch” and “athlete” respectively). We also found that the frequency of word “tennis” stands out among other adjectives, which specifies that the set covers the majority of tennis sport, followed by “bóng chày” (meaning “baseball”). Therefore, we expect our model to generate the best results for tennis images."
],
[
"Our main goal in this section is to see if Image Captioning models could learn well with Vietnamese language. To accomplish this task, we train and evaluate our dataset with two published Image Captioning models applying encoder-decoder architecture. The models we propose are Neural Image Captioning (NIC) model BIBREF14, Image Captioning model from the Pytorch-tutorial BIBREF15 by Yunjey.",
"Overall, CNN is first used for extracting image features for encoder part. The image features which are presented in vectors will be used as layers for decoding. For decoder part, RNN - LSTM are used to embed the vectors to target sentences using words/tokens provided in vocabulary."
],
[
"Model from pytorch-tutorial by Yunjey applies the baseline technique of CNN and LSTM for encoding and decoding images. Resnet-152 BIBREF18 architecture is proposed for encoder part, and we use the pretrained one on ILSVRC-2012-CLS BIBREF19 image classification dataset to tackle our current problem. LSTM is then used in this model to generate sentence word by word."
],
[
"NIC - Show and Tell uses CNN model which is currently yielding the state-of-the-art results. The model achieved 0.628 when evaluating on BLEU-1 on COCO-2014 dataset. For CNN part, we utilize VGG-16 BIBREF20 architecture pre-trained on COCO-2014 image sets with all categories. In decoding part, LSTM is not only trained to predict sentence but also to compute probability for each word to be generated. As a result, output sentence will be chosen using search algorithms to find the one that have words yielding the maximum probabilities."
],
[
"As the images in our dataset are manually annotated by human, there are mistakes including grammar, spelling or extra spaces, punctuation. Sometimes, the Vietnamese’s accent signs are placed in the wrong place due to distinct keyboard input methods. Therefore, we eliminate those common errors before working on evaluating our models."
],
[
"We conduct our experiments and do comparisons through three datasets with the same size and images of sportball category: Two Vietnamese datasets generated by two methods (translated by Google Translation service and annotated by human) and the original MS-COCO English dataset. The three sets are distributed into three subsets: 2,695 images for the training set, 924 images for validation set and 231 images for test set."
],
[
"To evaluate our dataset, we use metrics proposed by most authors in related works of extending Image Captioning dataset, which are BLEU BIBREF11, ROUGE BIBREF12 and CIDEr BIBREF13. BLEU and ROUGE are often used mainly for text summarization and machine translation, whereas CIDEr was designed especially for evaluating Image Captioning models."
],
[
"We do comparisons with three sportball datasets, as follows:",
"Original English (English-sportball): The original MS-COCO English dataset with 3,850 sportball images. This dataset is first evaluated in order to have base results for following comparisons.",
"Google-translated Vietnamese (GT-sportball): The translated MS-COCO English dataset into Vietnamese using Google Translation API, categorized into sportball.",
"Manually-annotated Vietnamese (UIT-ViIC): The Vietnamese dataset built with manually written captions for images from MS-COCO, categorized into sportball."
],
[
"The two following tables, Table TABREF36 and Table TABREF36, summarize experimental results of Pytorch-tutorial, NIC - Show and Tell models. The two models are trained with three mentioned datasets, which are English-sportball, GT-sportball, UIT-ViIC. After training, 924 images from validation subset for each dataset are used to validate the our models.",
"As can be seen in Table TABREF36, with model from Pytorch tutorial, MS-COCO English captions categorized with sportball yields better results than the two Vietnamese datasets. However, as number of consecutive words considered (BLEU gram) increase, UIT-ViIC’s BLEU scores start to pass that of English sportball and their gaps keep growing. The ROUGE-L and CIDEr-D scores for UIT-ViIC model prove the same thing, and interestingly, we can observe that the CIDEr-D score for the UIT-ViIC model surpasses English-sportball counterpart.",
"The same conclusion can be said from Table TABREF36. Show and Tell model’s results show that MS-COCO sportball English captions only gives better result at BLEU-1. From BLEU-3 to BLEU-4, both GT-sportball and UIT-ViIC yield superior scores to English-sportball. Besides, when limiting MS-COCO English dataset to sportball category only, the results are higher (0.689, 0.501, 0.355, 0.252) than when the model is trained on MS-COCO with all images, which scored only 0.629, 0.436, 0.290, 0.193 (results without tuning in 2018) from BLEU-1 to BLEU-4 respectively.",
"When we compare between two Vietnamese datasets, UIT-ViIC models perform better than sportball dataset translated automatically, GT-sportball. The gaps between the two results sets are more trivial in NIC model, and the numbers get smaller as the BLEU’s n-gram increase.",
"In Fig. FIGREF37, two images inputted into the models generate two Vietnamese captions that are able to describe accurately the sport game, which is soccer. The two models can also differentiate if there is more than one person in the images. However, when comparing GT-sportball outputs with UIT-ViIC ones in both images, UIT-ViIC yield captions that sound more naturally, considering Vietnamese language. Furthermore, UIT-ViIC demonstrates the specific action of the sport more accurately than GT-sportball. For example, in the below image of Fig. FIGREF37, UIT-ViIC tells the exact action (the man is preparing to throw the ball), whereas GT-sportball is mistaken (the man swing the bat). The confusion of GT-sportball happens due to GT-sportball train set is translated from original MS-COCO dataset, which is annotated in more various perspective and wider vocabulary range with the dataset size is not big enough.",
"There are cases when the main objects are too small, both English and GT - sportball captions tell the unexpected sport, which is tennis instead of baseball, for instance. Nevertheless, the majority of UIT-ViIC captions can tell the correct type of sport and action, even though the gender and age identifications still need to be improved."
],
[
"In this paper, we constructed a Vietnamese dataset with images from MS-COCO, relating to the category within sportball, consisting of 3,850 images with 19,250 manually-written Vietnamese captions. Next, we conducted several experiments on two popular existed Image Captioning models to evaluate their efficiency when learning two Vietnamese datasets. The results are then compared with the original MS-COCO English categorized with sportball category.",
"Overall, we can see that English set only out-performed Vietnamese ones in BLEU-1 metric, rather, the Vietnamese sets performing well basing on BLEU-2 to BLEU-4, especially CIDEr scores. On the other hand, when UIT-ViIC is compared with the dataset having captions translated by Google, the evaluation results and the output examples suggest that Google Translation service is able to perform acceptablly even though most translated captions are not perfectly natural and linguistically friendly. As a results, we proved that manually written captions for Vietnamese dataset is currently prefered.",
"For future improvements, extending the UIT-ViIC's cateogry into all types of sport to verify how the dataset's size and category affect the Image Captioning models' performance is considered as our highest priority. Moreover, the human resources for dataset construction will be expanded. Second, we will continue to finetune our experiments to find out proper parameters for models, especially with encoding and decoding architectures, for better learning performance with Vietnamese dataset, especially when the categories are limited."
]
],
"section_name": [
"Introduction",
"Related Works",
"Dataset Creation",
"Dataset Creation ::: Annotation Tool with Content Suggestions",
"Dataset Creation ::: Annotation Process",
"Dataset Creation ::: Dataset Analysis",
"Image Captioning Models",
"Image Captioning Models ::: Model from Pytorch tutorial",
"Image Captioning Models ::: NIC - Show and tell model",
"Experiments ::: Experiment Settings ::: Dataset preprocessing",
"Experiments ::: Experiment Settings ::: Dataset preparation",
"Experiments ::: Evaluation Measures",
"Experiments ::: Evaluation Measures ::: Comparison methods",
"Experiments ::: Experiment Results",
"Conclusion and Further Improvements"
]
} | {
"answers": [
{
"annotation_id": [
"50580b1cbf30578352a310af87141ea7326df3f6",
"de19fb8e2bd581858fd4399020e7aa3224971599"
],
"answer": [
{
"evidence": [
"We conduct our experiments and do comparisons through three datasets with the same size and images of sportball category: Two Vietnamese datasets generated by two methods (translated by Google Translation service and annotated by human) and the original MS-COCO English dataset. The three sets are distributed into three subsets: 2,695 images for the training set, 924 images for validation set and 231 images for test set."
],
"extractive_spans": [],
"free_form_answer": "MS-COCO dataset translated to Vietnamese using Google Translate and through human annotation",
"highlighted_evidence": [
"We conduct our experiments and do comparisons through three datasets with the same size and images of sportball category: Two Vietnamese datasets generated by two methods (translated by Google Translation service and annotated by human) and the original MS-COCO English dataset. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct our experiments and do comparisons through three datasets with the same size and images of sportball category: Two Vietnamese datasets generated by two methods (translated by Google Translation service and annotated by human) and the original MS-COCO English dataset. The three sets are distributed into three subsets: 2,695 images for the training set, 924 images for validation set and 231 images for test set."
],
"extractive_spans": [
"datasets generated by two methods (translated by Google Translation service and annotated by human)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct our experiments and do comparisons through three datasets with the same size and images of sportball category: Two Vietnamese datasets generated by two methods (translated by Google Translation service and annotated by human) and the original MS-COCO English dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"857f538be75d60f06bb2b913f68e656c9baf72e4",
"883ed001176e97055b8dde6fbad08e488d163db3"
],
"answer": [
{
"evidence": [
"We conduct our experiments and do comparisons through three datasets with the same size and images of sportball category: Two Vietnamese datasets generated by two methods (translated by Google Translation service and annotated by human) and the original MS-COCO English dataset. The three sets are distributed into three subsets: 2,695 images for the training set, 924 images for validation set and 231 images for test set."
],
"extractive_spans": [
"the original MS-COCO English dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct our experiments and do comparisons through three datasets with the same size and images of sportball category: Two Vietnamese datasets generated by two methods (translated by Google Translation service and annotated by human) and the original MS-COCO English dataset. The three sets are distributed into three subsets: 2,695 images for the training set, 924 images for validation set and 231 images for test set."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We do comparisons with three sportball datasets, as follows:",
"Original English (English-sportball): The original MS-COCO English dataset with 3,850 sportball images. This dataset is first evaluated in order to have base results for following comparisons."
],
"extractive_spans": [
"MS-COCO"
],
"free_form_answer": "",
"highlighted_evidence": [
"We do comparisons with three sportball datasets, as follows:\n\nOriginal English (English-sportball): The original MS-COCO English dataset with 3,850 sportball images. This dataset is first evaluated in order to have base results for following comparisons."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"bfc0904081e2a24d6fdfc58a41b082ef1c158cc7",
"ed9dc5856c637db1a3b862ed50ac361afb260fd8"
],
"answer": [
{
"evidence": [
"Overall, CNN is first used for extracting image features for encoder part. The image features which are presented in vectors will be used as layers for decoding. For decoder part, RNN - LSTM are used to embed the vectors to target sentences using words/tokens provided in vocabulary."
],
"extractive_spans": [
"CNN ",
"RNN - LSTM"
],
"free_form_answer": "",
"highlighted_evidence": [
"Overall, CNN is first used for extracting image features for encoder part. The image features which are presented in vectors will be used as layers for decoding. For decoder part, RNN - LSTM are used to embed the vectors to target sentences using words/tokens provided in vocabulary."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Regarding to the Vietnamese language processing, there are quite a number of research works on other tasks such as parsing, part-of-speech, named entity recognition, sentiment analysis, question answering. However, to the extent of our knowledge, there are no research publications on image captioning for Vietnamese. Therefore, we decide to build a new corpus of Vietnamese image captioning for Image Captioning research community and evaluate the state-of-the-art models on our corpus. In particular, we validate and compare the results by BLEU BIBREF11, ROUGE BIBREF12 and CIDEr BIBREF13 metrics between Neural Image Captioning (NIC) model BIBREF14, Image Captioning model from the Pytorch-tutorial BIBREF15 by Yunjey on our corpus as the pioneering results."
],
"extractive_spans": [
"Neural Image Captioning (NIC) model BIBREF14",
"Image Captioning model from the Pytorch-tutorial BIBREF15 by Yunjey"
],
"free_form_answer": "",
"highlighted_evidence": [
" In particular, we validate and compare the results by BLEU BIBREF11, ROUGE BIBREF12 and CIDEr BIBREF13 metrics between Neural Image Captioning (NIC) model BIBREF14, Image Captioning model from the Pytorch-tutorial BIBREF15 by Yunjey on our corpus as the pioneering results."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"3727ffb8383ceb149f9a0e3b19f7b9bbfc02dae9",
"fcd4b7e51ffa5c548763b24d11deb36259954c8e"
],
"answer": [
{
"evidence": [
"Therefore, we come up with the approach of constructing a Vietnamese Image Captioning dataset with descriptions written manually by human. Composed by Vietnamese people, the sentences would be more natural and friendlier to Vietnamese users. The main resources we used from MS-COCO for our dataset are images. Besides, we consider having our dataset focus on sportball category due to several reasons:"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Therefore, we come up with the approach of constructing a Vietnamese Image Captioning dataset with descriptions written manually by human. Composed by Vietnamese people, the sentences would be more natural and friendlier to Vietnamese users."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Our human resources for dataset construction involve five writers, whose ages are from 22-25. Being native Vietnamese residents, they are fluent in Vietnamese. All five UIT-ViIC creators first research and are trained about sports knowledge as well as the specialized vocabulary before starting to work."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Our human resources for dataset construction involve five writers, whose ages are from 22-25. Being native Vietnamese residents, they are fluent in Vietnamese. All five UIT-ViIC creators first research and are trained about sports knowledge as well as the specialized vocabulary before starting to work."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"79dbb399cfcc7b0c69d1a084c552fb03eae933f3",
"e9221c3aec4ee8ac48d558f7348812004dd765b3"
],
"answer": [
{
"evidence": [
"We conduct our experiments and do comparisons through three datasets with the same size and images of sportball category: Two Vietnamese datasets generated by two methods (translated by Google Translation service and annotated by human) and the original MS-COCO English dataset. The three sets are distributed into three subsets: 2,695 images for the training set, 924 images for validation set and 231 images for test set."
],
"extractive_spans": [],
"free_form_answer": "Translation and annotation.",
"highlighted_evidence": [
" Two Vietnamese datasets generated by two methods (translated by Google Translation service and annotated by human) and the original MS-COCO English dataset."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this study, generating image captions in Vietnamese language is put into consideration. One straightforward approach for this task is to translate English captions into Vietnamese by human or by using machine translation tool, Google translation. With the method of translating directly from English to Vietnamese, we found that the descriptions are sometimes confusing and unnatural to native people. Moreover, image understandings are cultural dependent, as in Western, people usually have different ways to grasp images and different vocabulary choices for describing contexts. For instance, in Fig. FIGREF2, one MS-COCO English caption introduce about \"a baseball player in motion of pitching\", which makes sense and capture accurately the main activity in the image. Though it sounds sensible in English, the sentence becomes less meaningful when we try to translate it into Vietnamese. One attempt of translating the sentence is performed by Google Translation, and the result is not as expected.",
"We conduct our experiments and do comparisons through three datasets with the same size and images of sportball category: Two Vietnamese datasets generated by two methods (translated by Google Translation service and annotated by human) and the original MS-COCO English dataset. The three sets are distributed into three subsets: 2,695 images for the training set, 924 images for validation set and 231 images for test set."
],
"extractive_spans": [],
"free_form_answer": "human translation and Google Translation service",
"highlighted_evidence": [
"In this study, generating image captions in Vietnamese language is put into consideration. One straightforward approach for this task is to translate English captions into Vietnamese by human or by using machine translation tool, Google translation. ",
"We conduct our experiments and do comparisons through three datasets with the same size and images of sportball category: Two Vietnamese datasets generated by two methods (translated by Google Translation service and annotated by human) and the original MS-COCO English dataset. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"366b3bcf05b371635788b96f4387863ee0af015c",
"aa7c5935bd8a794ae46f2865b4f5ab412ffe87d2"
],
"answer": [
{
"evidence": [
"Our main goal in this section is to see if Image Captioning models could learn well with Vietnamese language. To accomplish this task, we train and evaluate our dataset with two published Image Captioning models applying encoder-decoder architecture. The models we propose are Neural Image Captioning (NIC) model BIBREF14, Image Captioning model from the Pytorch-tutorial BIBREF15 by Yunjey.",
"Overall, CNN is first used for extracting image features for encoder part. The image features which are presented in vectors will be used as layers for decoding. For decoder part, RNN - LSTM are used to embed the vectors to target sentences using words/tokens provided in vocabulary."
],
"extractive_spans": [],
"free_form_answer": "encoder-decoder architecture of CNN for encoding and LSTM for decoding",
"highlighted_evidence": [
"To accomplish this task, we train and evaluate our dataset with two published Image Captioning models applying encoder-decoder architecture. The models we propose are Neural Image Captioning (NIC) model BIBREF14, Image Captioning model from the Pytorch-tutorial BIBREF15 by Yunjey.\n\nOverall, CNN is first used for extracting image features for encoder part. The image features which are presented in vectors will be used as layers for decoding. For decoder part, RNN - LSTM are used to embed the vectors to target sentences using words/tokens provided in vocabulary."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Overall, CNN is first used for extracting image features for encoder part. The image features which are presented in vectors will be used as layers for decoding. For decoder part, RNN - LSTM are used to embed the vectors to target sentences using words/tokens provided in vocabulary."
],
"extractive_spans": [
"CNN",
"RNN - LSTM"
],
"free_form_answer": "",
"highlighted_evidence": [
"Overall, CNN is first used for extracting image features for encoder part. The image features which are presented in vectors will be used as layers for decoding. For decoder part, RNN - LSTM are used to embed the vectors to target sentences using words/tokens provided in vocabulary."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"10110e9871563761a883762a8e3521577304a7d8",
"5e7a07a87f9ff463d8e472ae9b30cccc5a5f68b8"
],
"answer": [
{
"evidence": [
"The two following tables, Table TABREF36 and Table TABREF36, summarize experimental results of Pytorch-tutorial, NIC - Show and Tell models. The two models are trained with three mentioned datasets, which are English-sportball, GT-sportball, UIT-ViIC. After training, 924 images from validation subset for each dataset are used to validate the our models.",
"To evaluate our dataset, we use metrics proposed by most authors in related works of extending Image Captioning dataset, which are BLEU BIBREF11, ROUGE BIBREF12 and CIDEr BIBREF13. BLEU and ROUGE are often used mainly for text summarization and machine translation, whereas CIDEr was designed especially for evaluating Image Captioning models."
],
"extractive_spans": [],
"free_form_answer": " The two models are trained with three mentioned datasets, then validated on subset for each dataset and evaluated using BLEU, ROUGE and CIDEr measures.",
"highlighted_evidence": [
"The two models are trained with three mentioned datasets, which are English-sportball, GT-sportball, UIT-ViIC. After training, 924 images from validation subset for each dataset are used to validate the our models.",
"To evaluate our dataset, we use metrics proposed by most authors in related works of extending Image Captioning dataset, which are BLEU BIBREF11, ROUGE BIBREF12 and CIDEr BIBREF13. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To evaluate our dataset, we use metrics proposed by most authors in related works of extending Image Captioning dataset, which are BLEU BIBREF11, ROUGE BIBREF12 and CIDEr BIBREF13. BLEU and ROUGE are often used mainly for text summarization and machine translation, whereas CIDEr was designed especially for evaluating Image Captioning models.",
"The two following tables, Table TABREF36 and Table TABREF36, summarize experimental results of Pytorch-tutorial, NIC - Show and Tell models. The two models are trained with three mentioned datasets, which are English-sportball, GT-sportball, UIT-ViIC. After training, 924 images from validation subset for each dataset are used to validate the our models."
],
"extractive_spans": [],
"free_form_answer": "They evaluate on three metrics BLUE, ROUGE and CIDEr trained on the mentioned datasets.",
"highlighted_evidence": [
"To evaluate our dataset, we use metrics proposed by most authors in related works of extending Image Captioning dataset, which are BLEU BIBREF11, ROUGE BIBREF12 and CIDEr BIBREF13. BLEU and ROUGE are often used mainly for text summarization and machine translation, whereas CIDEr was designed especially for evaluating Image Captioning models.",
"The two following tables, Table TABREF36 and Table TABREF36, summarize experimental results of Pytorch-tutorial, NIC - Show and Tell models. The two models are trained with three mentioned datasets, which are English-sportball, GT-sportball, UIT-ViIC. After training, 924 images from validation subset for each dataset are used to validate the our models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What are the other two Vietnamese datasets?",
"Which English dataset do they evaluate on?",
"What neural network models do they use in their evaluation?",
"Do they use crowdsourcing for the captions?",
"What methods are used to build two other Viatnamese datsets?",
"What deep neural network models are used in evaluation?",
"How authors evaluate datasets using models trained on different datasets?"
],
"question_id": [
"23252644c04a043f630a855b563666dd57179d98",
"2f75b0498cf6a1fc35f1fb1cac44fc2fbd3d7878",
"0d3193d17c0a4edc8fa9854f279c2a1b878e8b29",
"b424ad7f9214076b963a0077d7345d7bb5a7a205",
"0dfe43985dea45d93ae2504cccca15ae1e207ccf",
"8276671a4d4d1fbc097cd4a4b7f5e7fadd7b9833",
"79885526713cc16eb734c88ff1169ae802cad589"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
"computer vision",
"computer vision",
"computer vision"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1. Example of MS-COCO English caption compare to Google Translated caption",
"Table 1. Public image datasets with manually annotated, non-English descriptions",
"Fig. 2. Examples of User Interfaces for Image Captions Annotations.",
"Table 2. Statistics on types of Vietnamese words.",
"Fig. 3. Dataset Analysis by Sentences Length.",
"Table 3. Experimental results of pytorch-tutorial models",
"Table 4. Experimental results of NIC - Show and Tell models",
"Fig. 4. Examples of captions generated by models from pytorch-tutorial trained on the three datasets that yieled expected outputs."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"5-Figure2-1.png",
"6-Table2-1.png",
"7-Figure3-1.png",
"9-Table3-1.png",
"9-Table4-1.png",
"10-Figure4-1.png"
]
} | [
"What are the other two Vietnamese datasets?",
"What methods are used to build two other Viatnamese datsets?",
"What deep neural network models are used in evaluation?",
"How authors evaluate datasets using models trained on different datasets?"
] | [
[
"2002.00175-Experiments ::: Experiment Settings ::: Dataset preparation-0"
],
[
"2002.00175-Introduction-2",
"2002.00175-Experiments ::: Experiment Settings ::: Dataset preparation-0"
],
[
"2002.00175-Image Captioning Models-1",
"2002.00175-Image Captioning Models-0"
],
[
"2002.00175-Experiments ::: Evaluation Measures-0",
"2002.00175-Experiments ::: Experiment Results-0"
]
] | [
"MS-COCO dataset translated to Vietnamese using Google Translate and through human annotation",
"human translation and Google Translation service",
"encoder-decoder architecture of CNN for encoding and LSTM for decoding",
"They evaluate on three metrics BLUE, ROUGE and CIDEr trained on the mentioned datasets."
] | 314 |
1901.09381 | Dual Co-Matching Network for Multi-choice Reading Comprehension | Multi-choice reading comprehension is a challenging task that requires complex reasoning procedure. Given passage and question, a correct answer need to be selected from a set of candidate answers. In this paper, we propose \textbf{D}ual \textbf{C}o-\textbf{M}atching \textbf{N}etwork (\textbf{DCMN}) which model the relationship among passage, question and answer bidirectionally. Different from existing approaches which only calculate question-aware or option-aware passage representation, we calculate passage-aware question representation and passage-aware answer representation at the same time. To demonstrate the effectiveness of our model, we evaluate our model on a large-scale multiple choice machine reading comprehension dataset (i.e. RACE). Experimental result show that our proposed model achieves new state-of-the-art results. | {
"paragraphs": [
[
"Machine reading comprehension and question answering has becomes a crucial application problem in evaluating the progress of AI system in the realm of natural language processing and understanding BIBREF0 . The computational linguistics communities have devoted significant attention to the general problem of machine reading comprehension and question answering.",
"However, most of existing reading comprehension tasks only focus on shallow QA tasks that can be tackled very effectively by existing retrieval-based techniques BIBREF1 . For example, recently we have seen increased interest in constructing extractive machine reading comprehension datasets such as SQuAD BIBREF2 and NewsQA BIBREF3 . Given a document and a question, the expected answer is a short span in the document. Question context usually contains sufficient information for identifying evidence sentences that entail question-answer pairs. For example, 90.2% questions in SQuAD reported by Min BIBREF4 are answerable from the content of a single sentence. Even in some multi-turn conversation tasks, the existing models BIBREF5 mostly focus on retrieval-based response matching.",
"In this paper, we focus on multiple-choice reading comprehension datasets such as RACE BIBREF6 in which each question comes with a set of answer options. The correct answer for most questions may not appear in the original passage which makes the task more challenging and allow a rich type of questions such as passage summarization and attitude analysis. This requires a more in-depth understanding of a single document and leverage external world knowledge to answer these questions. Besides, comparing to traditional reading comprehension problem, we need to fully consider passage-question-answer triplets instead of passage-question pairwise matching.",
"In this paper, we propose a new model, Dual Co-Matching Network, to match a question-answer pair to a given passage bidirectionally. Our network leverages the latest breakthrough in NLP: BERT BIBREF7 contextual embedding. In the origin BERT paper, the final hidden vector corresponding to first input token ([CLS]) is used as the aggregation representation and then a standard classification loss is computed with a classification layer. We think this method is too rough to handle the passage-question-answer triplet because it only roughly concatenates the passage and question as the first sequence and uses question as the second sequence, without considering the relationship between the question and the passage. So we propose a new method to model the relationship among the passage, the question and the candidate answer.",
"Firstly we use BERT as our encode layer to get the contextual representation of the passage, question, answer options respectively. Then a matching layer is constructed to get the passage-question-answer triplet matching representation which encodes the locational information of the question and the candidate answer matched to a specific context of the passage. Finally we apply a hierarchical aggregation method over the matching representation from word-level to sequence-level and then from sequence level to document-level. Our model improves the state-of-the-art model by 2.6 percentage on the RACE dataset with BERT base model and further improves the result by 3 percentage with BERT large model."
],
[
"For the task of multi-choice reading comprehension, the machine is given a passage, a question and a set of candidate answers. The goal is to select the correct answer from the candidates. P, Q, and A are used to represent the passage, the question and a candidate answer respectively. For each candidate answer, our model constructs a question-aware passage representation, a question-aware passage representation and a question-aware passage representation. After a max-pooling layer, the three representations are concatenated as the final representation of the candidate answer. The representations of all candidate answers are then used for answer selection.",
"In section \"Encoding layer\" , we introduce the encoding mechanism. Then in section \"Conclusions\" , we introduce the calculation procedure of the matching representation between the passage, the question and the candidate answer. In section \"Aggregation layer\" , we introduce the aggregation method and the objective function."
],
[
"This layer encodes each token in passage and question into a fixed-length vector including both word embedding and contextualized embedding. We utilize the latest result from BERT BIBREF7 as our encoder and the final hidden state of BERT is used as our final embedding. In the origin BERT BIBREF7 , the procedure of processing multi-choice problem is that the final hidden vector corresponding to first input token ([CLS]) is used as the aggregation representation of the passage, the question and the candidate answer, which we think is too simple and too rough. So we encode the passage, the question and the candidate answer respectively as follows: ",
"$$\\begin{split}\n\\textbf {H}^p=&BERT(\\textbf {P}),\\textbf {H}^q=BERT(\\textbf {Q}) \\\\\n&\\textbf {H}^a=BERT(\\textbf {A})\n\\end{split}$$ (Eq. 3) ",
"where $\\textbf {H}^p \\in R^{P \\times l}$ , $\\textbf {H}^q \\in R^{Q \\times l}$ and $\\textbf {H}^a \\in R^{A \\times l}$ are sequences of hidden state generated by BERT. $P$ , $Q$ , $A$ are the sequence length of the passage, the question and the candidate answer respectively. $l$ is the dimension of the BERT hidden state."
],
[
"To fully mine the information in a {P, Q, A} triplet , We make use of the attention mechanism to get the bi-directional aggregation representation between the passage and the answer and do the same process between the passage and the question. The attention vectors between the passage and the answer are calculated as follows: ",
"$$\\begin{split}\n\\textbf {W}&=SoftMax(\\textbf {H}^p({H^{a}G + b})^T), \\\\\n\\textbf {M}^{p}&=\\textbf {W}\\textbf {H}^{a},\n\\textbf {M}^{a}=\\textbf {W}^T\\textbf {H}^{p},\n\\end{split}$$ (Eq. 5) ",
"where $G \\in R^{l \\times l}$ and $b \\in R^{A \\times l}$ are the parameters to learn. $\\textbf {W} \\in R^{P \\times A}$ is the attention weight matrix between the passage and the answer. $\\textbf {M}^{p} \\in R^{P \\times l}$ represent how each hidden state in passage can be aligned to the answe rand $\\textbf {M}^{a} \\in R^{A \\times l}$ represent how the candidate answer can be aligned to each hidden state in passage. In the same method, we can get $\\textbf {W}^{\\prime } \\in R^{P \\times Q}$ and $\\textbf {M}^{q} \\in R^{Q \\times l}$ for the representation between the passage and the question.",
"To integrate the original contextual representation, we follow the idea from BIBREF8 to fuse $\\textbf {M}^{a}$ with original $\\textbf {H}^p$ and so is $\\textbf {M}^{p}$ . The final representation of passage and the candidate answer is calculated as follows: ",
"$$\\begin{split}\n\\textbf {S}^{p}&=F([\\textbf {M}^{a} - \\textbf {H}^{a}; \\textbf {M}^{a} \\cdot \\textbf {H}^{a}]W_1 + b_1),\\\\\n\\textbf {S}^{a}&=F([\\textbf {M}^{p} - \\textbf {H}^{p}; \\textbf {M}^{p} \\cdot \\textbf {H}^{p}]W_2 + b_2),\\\\\n\\end{split}$$ (Eq. 6) ",
"where $W_1, W_2 \\in R^{2l \\times l}$ and $b_1 \\in R^{P \\times l}, b_2 \\in R^{(A) \\times l}$ are the parameters to learn. $[ ; ]$ is the column-wise concatenation and $-, \\cdot $ are the element-wise subtraction and multiplication between two matrices. Previous work in BIBREF9 , BIBREF10 shows this method can build better matching representation. $F$ is the activation function and we choose $ReLU$ activation function there. $\\textbf {S}^{p} \\in R^{P \\times l}$ and $\\textbf {S}^{a} \\in R^{A \\times l}$ are the final representations of the passage and candidate answer. In the question side, we can get $\\textbf {S}^{p^{\\prime }} \\in R^{P \\times l}$ and $\\textbf {S}^{q} \\in R^{Q \\times l}$ in the same calculation method."
],
[
"To get the final representation for each candidate answer, a row-wise max pooling operation is used to $\\textbf {S}^{p}$ and $\\textbf {S}^{a}$ . Then we get $\\textbf {C}^{p} \\in R^l$ and $\\textbf {C}^{a} \\in R^l$ respectively. In the question side, $\\textbf {C}^{p^{\\prime }} \\in R^l$ and $\\textbf {C}^{q} \\in R^l$ are calculated. Finally, we concatenate all of them as the final output $\\textbf {C} \\in R^{4l}$ for each {P, Q, A} triplet. ",
"$$\\begin{split}\n\\textbf {C}^{p} = &Pooling(\\textbf {S}^{p}),\n\\textbf {C}^{a} = Pooling(\\textbf {S}^{a}),\\\\\n\\textbf {C}^{p^{\\prime }} = &Pooling(\\textbf {S}^{p^{\\prime }}),\n\\textbf {C}^{q} = Pooling(\\textbf {S}^{q}),\\\\\n\\textbf {C} &= [\\textbf {C}^{p}; \\textbf {C}^{a};\\textbf {C}^{p^{\\prime }};\\textbf {C}^{q}]\n\\end{split}$$ (Eq. 9) ",
"For each candidate answer choice $i$ , its matching representation with the passage and question can be represented as $\\textbf {C}_i$ . Then our loss function is computed as follows: ",
"$$\\begin{split}\nL(\\textbf {A}_i|\\textbf {P,Q}) = -log{\\frac{exp(V^T\\textbf {C}_i)}{\\sum _{j=1}^N{exp(V^T\\textbf {C}_j)}}},\n\\end{split}$$ (Eq. 10) ",
"where $V \\in R^l$ is a parameter to learn."
],
[
"We evaluate our model on RACE dataset BIBREF6 , which consists of two subsets: RACE-M and RACE-H. RACE-M comes from middle school examinations while RACE-H comes from high school examinations. RACE is the combination of the two.",
"We compare our model with the following baselines: MRU(Multi-range Reasoning) BIBREF12 , DFN(Dynamic Fusion Networks) BIBREF11 , HCM(Hierarchical Co-Matching) BIBREF8 , OFT(OpenAI Finetuned Transformer LM) BIBREF13 , RSM(Reading Strategies Model) BIBREF14 . We also compare our model with the BERT baseline and implement the method described in the original paper BIBREF7 , which uses the final hidden vector corresponding to the first input token ([CLS]) as the aggregate representation followed by a classification layer and finally a standard classification loss is computed.",
"Results are shown in Table 2 . We can see that the performance of BERT $_{base}$ is very close to the previous state-of-the-art and BERT $_{large}$ even outperforms it for 3.7%. But experimental result shows that our model is more powerful and we further improve the result for 2.2% computed to BERT $_{base}$ and 2.2% computed to BERT $_{large}$ ."
],
[
"In this paper, we propose a Dual Co-Matching Network, DCMN, to model the relationship among the passage, question and the candidate answer bidirectionally. By incorporating the latest breakthrough, BERT, in an innovative way, our model achieves the new state-of-the-art in RACE dataset, outperforming the previous state-of-the-art model by 2.2% in RACE full dataset."
]
],
"section_name": [
"Introduction",
"Model",
"Encoding layer",
"Matching layer",
"Aggregation layer",
"Experiment",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"10179673f79e1597daeaf55c298c2e9af730c7d5",
"19410fe649f090bb89301c8b3392b11915e81031"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "Yes, they also evaluate on the ROCStories\n(Spring 2016) dataset which collects 50k five sentence commonsense stories. ",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate our model on RACE dataset BIBREF6 , which consists of two subsets: RACE-M and RACE-H. RACE-M comes from middle school examinations while RACE-H comes from high school examinations. RACE is the combination of the two."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our model on RACE dataset BIBREF6 , which consists of two subsets: RACE-M and RACE-H."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"e28afbb32f69deca0d83ecdc074a696d08f31e47",
"ecbdc8a6fe79b519eabf615391c3009cea4ece77"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4: Experiment results on RACE test set. All the results are from single models. PSS : Passage Sentence Selection; AOI : Answer Option Interaction. ∗ indicates our implementation."
],
"extractive_spans": [],
"free_form_answer": "Model's performance ranges from 67.0% to 82.8%.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Experiment results on RACE test set. All the results are from single models. PSS : Passage Sentence Selection; AOI : Answer Option Interaction. ∗ indicates our implementation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 4: Experiment results on RACE test set. All the results are from single models. PSS : Passage Sentence Selection; AOI : Answer Option Interaction. ∗ indicates our implementation."
],
"extractive_spans": [],
"free_form_answer": "67% using BERT_base, 74.1% using BERT_large, 75.8% using BERT_large, Passage, and Answer, and 82.8% using XLNET_large with Passage and Answer features",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Experiment results on RACE test set. All the results are from single models. PSS : Passage Sentence Selection; AOI : Answer Option Interaction. ∗ indicates our implementation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"Do they evaluate their model on datasets other than RACE?",
"What is their model's performance on RACE?"
],
"question_id": [
"0871827cfeceed4ee78ce7407aaf6e85dd1f9c25",
"240058371e91c6b9509c0398cbe900855b46c328"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"reading comprehension",
"reading comprehension"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: An example passage with related question and options from RACE dataset. The ground-truth answer and the evidence sentences in the passage are in bold.",
"Figure 1: The framework of our model. P-Passage, Q-Question, O-Option.",
"Table 2: Analysis of the sentences in passage required to answer questions on RACE and COIN. 50 examples from each dataset are sampled randomly. N sent indicates the number of sentences required to answer the question. The evidence sentences in the passage are in emphasis and the correct answer is with bold.",
"Table 3: Statistics of multi-choice machine reading comprehension datasets. #o is the average number of candidate options for each question. #p is the number of documents included in the dataset. #q indicates the total number of questions in the dataset.",
"Table 4: Experiment results on RACE test set. All the results are from single models. PSS : Passage Sentence Selection; AOI : Answer Option Interaction. ∗ indicates our implementation.",
"Table 5: Ablation study on RACE dev set. PSS : Passage Sentence Selection. AOI : Answer Option Interaction. DCMN+: DCMN + PSS + AOI",
"Table 6: Results on the test set of SemEval Task 11, ROCStories, MCTest and the development set of COIN Task 1. The test set of COIN is not public. DCMN+: DCMN + PSS + AOI . Previous SOTA: previous state-of-the-art model. All the results are from single models.",
"Table 7: Performance comparison with different combination methods on the RACE dev set4. We use BERTbase as our encoder here. [; ] indicates the concatenation operation. SP O is the unidirectional matching referred in Eq. 24. MP O is the bidirectional matching representation referred in Eq. 27. Here uses our annotations to show previous matching strategies.",
"Figure 2: Results of sentence selection on the development set of RACE and COIN when selecting different number of sentences (Top K). We use BERTbase as encoder and cosine score method here. RACE/COIN-w indicates the results on RACE/COIN without sentence selection module.",
"Table 8: Results on RACE and COIN dev set with different scoring methods (cosine and bilinear score in PSS). We use BERTbase as encoder here.",
"Figure 3: Performance on different question types, tested on the RACE development set. BERTlarge is used as encoder here. OI: Answer Option Interaction. SS: Passage Sentence Selection."
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"3-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"7-Table5-1.png",
"8-Table6-1.png",
"8-Table7-1.png",
"8-Figure2-1.png",
"8-Table8-1.png",
"9-Figure3-1.png"
]
} | [
"Do they evaluate their model on datasets other than RACE?",
"What is their model's performance on RACE?"
] | [
[
"1901.09381-Experiment-0"
],
[
"1901.09381-6-Table4-1.png"
]
] | [
"Yes, they also evaluate on the ROCStories\n(Spring 2016) dataset which collects 50k five sentence commonsense stories. ",
"67% using BERT_base, 74.1% using BERT_large, 75.8% using BERT_large, Passage, and Answer, and 82.8% using XLNET_large with Passage and Answer features"
] | 315 |
1906.01615 | Sequential Neural Networks as Automata | This work attempts to explain the types of computation that neural networks can perform by relating them to automata. We first define what it means for a real-time network with bounded precision to accept a language. A measure of network memory follows from this definition. We then characterize the classes of languages acceptable by various recurrent networks, attention, and convolutional networks. We find that LSTMs function like counter machines and relate convolutional networks to the subregular hierarchy. Overall, this work attempts to increase our understanding and ability to interpret neural networks through the lens of theory. These theoretical insights help explain neural computation, as well as the relationship between neural networks and natural language grammar. | {
"paragraphs": [
[
"In recent years, neural networks have achieved tremendous success on a variety of natural language processing (NLP) tasks. Neural networks employ continuous distributed representations of linguistic data, which contrast with classical discrete methods. While neural methods work well, one of the downsides of the distributed representations that they utilize is interpretability. It is hard to tell what kinds of computation a model is capable of, and when a model is working, it is hard to tell what it is doing.",
"This work aims to address such issues of interpretability by relating sequential neural networks to forms of computation that are more well understood. In theoretical computer science, the computational capacities of many different kinds of automata formalisms are clearly established. Moreover, the Chomsky hierarchy links natural language to such automata-theoretic languages BIBREF0 . Thus, relating neural networks to automata both yields insight into what general forms of computation such models can perform, as well as how such computation relates to natural language grammar.",
"Recent work has begun to investigate what kinds of automata-theoretic computations various types of neural networks can simulate. BIBREF1 propose a connection between long short-term memory networks (LSTMs) and counter automata. They provide a construction by which the LSTM can simulate a simplified variant of a counter automaton. They also demonstrate that LSTMs can learn to increment and decrement their cell state as counters in practice. BIBREF2 , on the other hand, describe a connection between the gating mechanisms of several recurrent neural network (RNN) architectures and weighted finite-state acceptors.",
"This paper follows BIBREF1 by analyzing the expressiveness of neural network acceptors under asymptotic conditions. We formalize asymptotic language acceptance, as well as an associated notion of network memory. We use this theory to derive computation upper bounds and automata-theoretic characterizations for several different kinds of recurrent neural networks section:rnns, as well as other architectural variants like attention section:attention and convolutional networks (CNNs) section:cnns. This leads to a fairly complete automata-theoretic characterization of sequential neural networks.",
"In section:experiments, we report empirical results investigating how well these asymptotic predictions describe networks with continuous activations learned by gradient descent. In some cases, networks behave according to the theoretical predictions, but we also find cases where there is gap between the asymptotic characterization and actual network behavior.",
"Still, discretizing neural networks using an asymptotic analysis builds intuition about how the network computes. Thus, this work provides insight about the types of computations that sequential neural networks can perform through the lens of formal language theory. In so doing, we can also compare the notions of grammar expressible by neural networks to formal models that have been proposed for natural language grammar."
],
[
"To investigate the capacities of different neural network architectures, we need to first define what it means for a neural network to accept a language. There are a variety of ways to formalize language acceptance, and changes to this definition lead to dramatically different characterizations.",
"In their analysis of RNN expressiveness, BIBREF3 allow RNNs to perform an unbounded number of recurrent steps even after the input has been consumed. Furthermore, they assume that the hidden units of the network can have arbitrarily fine-grained precision. Under this very general definition of language acceptance, BIBREF3 found that even a simple recurrent network (SRN) can simulate a Turing machine.",
"We want to impose the following constraints on neural network computation, which are more realistic to how networks are trained in practice BIBREF1 :",
"Informally, a neural sequence acceptor is a network which reads a variable-length sequence of characters and returns the probability that the input sequence is a valid sentence in some formal language. More precisely, we can write:",
"[Neural sequence acceptor] Let INLINEFORM0 be a matrix representation of a sentence where each row is a one-hot vector over an alphabet INLINEFORM1 . A neural sequence acceptor INLINEFORM2 is a family of functions parameterized by weights INLINEFORM3 . For each INLINEFORM4 and INLINEFORM5 , the function INLINEFORM6 takes the form INLINEFORM7 ",
"In this definition, INLINEFORM0 corresponds to a general architecture like an LSTM, whereas INLINEFORM1 represents a specific network, such as an LSTM with weights that have been learned from data.",
"In order to get an acceptance decision from this kind of network, we will consider what happens as the magnitude of its parameters gets very large. Under these asymptotic conditions, the internal connections of the network approach a discrete computation graph, and the probabilistic output approaches the indicator function of some language fig:acceptanceexample.",
"[Asymptotic acceptance] Let INLINEFORM0 be a language with indicator function INLINEFORM1 . A neural sequence acceptor INLINEFORM2 with weights INLINEFORM3 asymptotically accepts INLINEFORM4 if INLINEFORM5 ",
"Note that the limit of INLINEFORM0 represents the function that INLINEFORM1 converges to pointwise.",
"Discretizing the network in this way lets us analyze it as an automaton. We can also view this discretization as a way of bounding the precision that each unit in the network can encode, since it is forced to act as a discrete unit instead of a continuous value. This prevents complex fractal representations that rely on infinite precision. We will see later that, for every architecture considered, this definition ensures that the value of every unit in the network is representable in INLINEFORM0 bits on sequences of length INLINEFORM1 .",
"It is important to note that real neural networks can learn strategies not allowed by the asymptotic definition. Thus, this way of analyzing neural networks is not completely faithful to their practical usage. In section:experiments, we discuss empirical studies investigating how trained networks compare to the asymptotic predictions. While we find evidence of networks learning behavior that is not asymptotically stable, adding noise to the network during training seems to make it more difficult for the network to learn non-asymptotic strategies.",
"Consider a neural network that asymptotically accepts some language. For any given length, we can pick weights for the network such that it will correctly decide strings shorter than that length (thm:arbitraryapproximation).",
"Analyzing a network's asymptotic behavior also gives us a notion of the network's memory. BIBREF1 illustrate how the LSTM's additive cell update gives it more effective memory than the squashed state of an SRN or GRU for solving counting tasks. We generalize this concept of memory capacity as state complexity. Informally, the state complexity of a node within a network represents the number of values that the node can achieve asymptotically as a function of the sequence length INLINEFORM0 . For example, the LSTM cell state will have INLINEFORM1 state complexity (thm:lstmmemorybound), whereas the state of other recurrent networks has INLINEFORM2 (thm:SRNmemorybound).",
"State complexity applies to a hidden state sequence, which we can define as follows:",
"[Hidden state] For any sentence INLINEFORM0 , let INLINEFORM1 be the length of INLINEFORM2 . For INLINEFORM3 , the INLINEFORM4 -length hidden state INLINEFORM5 with respect to parameters INLINEFORM6 is a sequence of functions given by INLINEFORM7 ",
"Often, a sequence acceptor can be written as a function of an intermediate hidden state. For example, the output of the recurrent layer acts as a hidden state in an LSTM language acceptor. In recurrent architectures, the value of the hidden state is a function of the preceding prefix of characters, but with convolution or attention, it can depend on characters occurring after index INLINEFORM0 .",
"The state complexity is defined as the cardinality of the configuration set of such a hidden state:",
"[Configuration set] For all INLINEFORM0 , the configuration set of hidden state INLINEFORM1 with respect to parameters INLINEFORM2 is given by INLINEFORM3 ",
"where INLINEFORM0 is the length, or height, of the sentence matrix INLINEFORM1 .",
"[Fixed state complexity] For all INLINEFORM0 , the fixed state complexity of hidden state INLINEFORM1 with respect to parameters INLINEFORM2 is given by INLINEFORM3 ",
"[General state complexity] For all INLINEFORM0 , the general state complexity of hidden state INLINEFORM1 is given by INLINEFORM2 ",
"To illustrate these definitions, consider a simplified recurrent mechanism based on the LSTM cell. The architecture is parameterized by a vector INLINEFORM0 . At each time step, the network reads a bit INLINEFORM1 and computes ft = (1 xt)",
"it = (2 xt)",
"ht = ft ht-1 + it .",
"When we set INLINEFORM0 , INLINEFORM1 asymptotically computes the sum of the preceding inputs. Because this sum can evaluate to any integer between 0 and INLINEFORM2 , INLINEFORM3 has a fixed state complexity of DISPLAYFORM0 ",
"However, when we use parameters INLINEFORM0 , we get a reduced network where INLINEFORM1 asymptotically. Thus, DISPLAYFORM0 ",
"Finally, the general state complexity is the maximum fixed complexity, which is INLINEFORM0 .",
"For any neural network hidden state, the state complexity is at most INLINEFORM0 (thm:generalstatecomplexity). This means that the value of the hidden unit can be encoded in INLINEFORM1 bits. Moreover, for every specific architecture considered, we observe that each fixed-length state vector has at most INLINEFORM2 state complexity, or, equivalently, can be represented in INLINEFORM3 bits.",
"Architectures that have exponential state complexity, such as the transformer, do so by using a variable-length hidden state. State complexity generalizes naturally to a variable-length hidden state, with the only difference being that INLINEFORM0 def:hiddenstate becomes a sequence of variably sized objects rather than a sequence of fixed-length vectors.",
"Now, we consider what classes of languages different neural networks can accept asymptotically. We also analyze different architectures in terms of state complexity. The theory that emerges from these tools enables better understanding of the computational processes underlying neural sequence models."
],
[
"As previously mentioned, RNNs are Turing-complete under an unconstrained definition of acceptance BIBREF3 . The classical reduction of a Turing machine to an RNN relies on two unrealistic assumptions about RNN computation BIBREF1 . First, the number of recurrent computations must be unbounded in the length of the input, whereas, in practice, RNNs are almost always trained in a real-time fashion. Second, it relies heavily on infinite precision of the network's logits. We will see that the asymptotic analysis, which restricts computation to be real-time and have bounded precision, severely narrows the class of formal languages that an RNN can accept."
],
[
"The SRN, or Elman network, is the simplest type of RNN BIBREF4 :",
"[SRN layer] DISPLAYFORM0 ",
"A well-known problem with SRNs is that they struggle with long-distance dependencies. One explanation of this is the vanishing gradient problem, which motivated the development of more sophisticated architectures like the LSTM BIBREF5 . Another shortcoming of the SRN is that, in some sense, it has less memory than the LSTM. This is because, while both architectures have a fixed number of hidden units, the SRN units remain between INLINEFORM0 and 1, whereas the value of each LSTM cell can grow unboundedly BIBREF1 . We can formalize this intuition by showing that the SRN has finite state complexity:",
"[SRN state complexity] For any length INLINEFORM0 , the SRN cell state INLINEFORM1 has state complexity INLINEFORM2 ",
"For every INLINEFORM0 , each unit of INLINEFORM1 will be the output of a INLINEFORM2 . In the limit, it can achieve either INLINEFORM3 or 1. Thus, for the full vector, the number of configurations is bounded by INLINEFORM4 .",
"It also follows from thm:SRNmemorybound that the languages asymptotically acceptable by an SRN are a subset of the finite-state (i.e. regular) languages. thm:srnlowerbound provides the other direction of this containment. Thus, SRNs are equivalent to finite-state automata.",
"[SRN characterization] Let INLINEFORM0 denote the languages acceptable by an SRN, and INLINEFORM1 the regular languages. Then, INLINEFORM2 ",
"This characterization is quite diminished compared to Turing completeness. It is also more descriptive of what SRNs can express in practice. We will see that LSTMs, on the other hand, are strictly more powerful than the regular languages."
],
[
"An LSTM is a recurrent network with a complex gating mechanism that determines how information from one time step is passed to the next. Originally, this gating mechanism was designed to remedy the vanishing gradient problem in SRNs, or, equivalently, to make it easier for the network to remember long-term dependencies BIBREF5 . Due to strong empirical performance on many language tasks, LSTMs have become a canonical model for NLP.",
" BIBREF1 suggest that another advantage of the LSTM architecture is that it can use its cell state as counter memory. They point out that this constitutes a real difference between the LSTM and the GRU, whose update equations do not allow it to increment or decrement its memory units. We will further investigate this connection between LSTMs and counter machines.",
"[LSTM layer] ft = (Wf xt + Uf ht-1 + bf)",
"it = (Wi xt + Ui ht-1 + bi)",
"ot = (Wo xt + Uo ht-1 + bo)",
"ct = (Wc xt + Uc ht-1 + bc)",
"ct = ft ct-1 + it ct",
"ht = ot f(ct) .",
"In ( SECREF9 ), we set INLINEFORM0 to either the identity or INLINEFORM1 BIBREF1 , although INLINEFORM2 is more standard in practice. The vector INLINEFORM3 is the output that is received by the next layer, and INLINEFORM4 is an unexposed memory vector called the cell state.",
"[LSTM state complexity] The LSTM cell state INLINEFORM0 has state complexity INLINEFORM1 ",
"At each time step INLINEFORM0 , we know that the configuration sets of INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are each subsets of INLINEFORM4 . Similarly, the configuration set of INLINEFORM5 is a subset of INLINEFORM6 . This allows us to rewrite the elementwise recurrent update as [ct]i = [ft]i [ct-1]i + [it]i [ct]i",
"= a [ct-1]i + b",
"where INLINEFORM0 and INLINEFORM1 .",
"Let INLINEFORM0 be the configuration set of INLINEFORM1 . At each time step, we have exactly two ways to produce a new value in INLINEFORM2 that was not in INLINEFORM3 : either we decrement the minimum value in INLINEFORM4 or increment the maximum value. It follows that |St| = 2 + |St-1|",
"|Sn| = O(n) .",
"For all INLINEFORM0 units of the cell state, we get DISPLAYFORM0 ",
"The construction in thm:lstmmemorybound produces a counter machine whose counter and state update functions are linearly separable. Thus, we have an upper bound on the expressive power of the LSTM:",
"[LSTM upper bound] Let INLINEFORM0 be the real-time counter languages BIBREF6 , BIBREF7 . Then, INLINEFORM1 ",
"thm:lstmupperbound constitutes a very tight upper bound on the expressiveness of LSTM computation. Asymptotically, LSTMs are not powerful enough to model even the deterministic context-free language INLINEFORM0 .",
" BIBREF1 show how the LSTM can simulate a simplified variant of the counter machine. Combining these results, we see that the asymptotic expressiveness of the LSTM falls somewhere between the general and simplified counter languages. This suggests counting is a good way to understand the behavior of LSTMs."
],
[
"The GRU is a popular gated recurrent architecture that is in many ways similar to the LSTM BIBREF8 . Rather than having separate forget and input gates, the GRU utilizes a single gate that controls both functions.",
"[GRU layer] zt = (Wz xt + Uz ht-1 + bz)",
"rt = (Wr xt + Ur ht-1 + br)",
"ut = ( Wu xt + Uu(rt ht-1) + bu )",
"ht = zt ht-1 + (1 - zt) ut .",
" BIBREF1 observe that GRUs do not exhibit the same counter behavior as LSTMs on languages like INLINEFORM0 . As with the SRN, the GRU state is squashed between INLINEFORM1 and 1 ( SECREF11 ). Taken together, Lemmas SECREF10 and SECREF10 show that GRUs, like SRNs, are finite-state.",
"[GRU characterization] INLINEFORM0 "
],
[
"Synthesizing all of these results, we get the following complexity hierarchy: = L() = L()",
"L() .",
"Basic recurrent architectures have finite state, whereas the LSTM is strictly more powerful than a finite-state machine."
],
[
"Attention is a popular enhancement to sequence-to-sequence (seq2seq) neural networks BIBREF9 , BIBREF10 , BIBREF11 . Attention allows a network to recall specific encoder states while trying to produce output. In the context of machine translation, this mechanism models the alignment between words in the source and target languages. More recent work has found that “attention is all you need” BIBREF12 , BIBREF13 . In other words, networks with only attention and no recurrent connections perform at the state of the art on many tasks.",
"An attention function maps a query vector and a sequence of paired key-value vectors to a weighted combination of the values. This lookup function is meant to retrieve the values whose keys resemble the query.",
"[Dot-product attention] For any INLINEFORM0 , define a query vector INLINEFORM1 , matrix of key vectors INLINEFORM2 , and matrix of value vectors INLINEFORM3 . Dot-product attention is given by INLINEFORM4 ",
"In def:attention, INLINEFORM0 creates a vector of similarity scores between the query INLINEFORM1 and the key vectors in INLINEFORM2 . The output of attention is thus a weighted sum of the value vectors where the weight for each value represents its relevance.",
"In practice, the dot product INLINEFORM0 is often scaled by the square root of the length of the query vector BIBREF12 . However, this is only done to improve optimization and has no effect on expressiveness. Therefore, we consider the unscaled version.",
"In the asymptotic case, attention reduces to a weighted average of the values whose keys maximally resemble the query. This can be viewed as an INLINEFORM0 operation.",
"[Asymptotic attention] Let INLINEFORM0 be the subsequence of time steps that maximize INLINEFORM1 . Asymptotically, attention computes INLINEFORM2 ",
" [Asymptotic attention with unique maximum] If INLINEFORM0 has a unique maximum over INLINEFORM1 , then attention asymptotically computes INLINEFORM2 ",
"Now, we analyze the effect of adding attention to an acceptor network. Because we are concerned with language acceptance instead of transduction, we consider a simplified seq2seq attention model where the output sequence has length 1:",
"[Attention layer] Let the hidden state INLINEFORM0 be the output of an encoder network where the union of the asymptotic configuration sets over all INLINEFORM1 is finite. We attend over INLINEFORM2 , the matrix stacking INLINEFORM3 , by computing INLINEFORM4 ",
"In this model, INLINEFORM0 represents a summary of the relevant information in the prefix INLINEFORM1 . The query that is used to attend at time INLINEFORM2 is a simple linear transformation of INLINEFORM3 .",
"In addition to modeling alignment, attention improves a bounded-state model by providing additional memory. By converting the state of the network to a growing sequence INLINEFORM0 instead of a fixed length vector INLINEFORM1 , attention enables INLINEFORM2 state complexity.",
"[Encoder state complexity] The full state of the attention layer has state complexity INLINEFORM0 ",
"The INLINEFORM0 complexity of the LSTM architecture means that it is impossible for LSTMs to copy or reverse long strings. The exponential state complexity provided by attention enables copying, which we can view as a simplified version of machine translation. Thus, it makes sense that attention is almost universal in machine translation architectures. The additional memory introduced by attention might also allow more complex hierarchical representations.",
"A natural follow-up question to thm:attentionstatecomplexity is whether this additional complexity is preserved in the attention summary vector INLINEFORM0 . Attending over INLINEFORM1 does not preserve exponential state complexity. Instead, we get an INLINEFORM2 summary of INLINEFORM3 .",
"[Summary state complexity] The attention summary vector has state complexity INLINEFORM0 ",
"With minimal additional assumptions, we can show a more restrictive bound: namely, that the complexity of the summary vector is finite. sec:attentionresults discusses this in more detail."
],
[
"While CNNs were originally developed for image processing BIBREF14 , they are also used to encode sequences. One popular application of this is to build character-level representations of words BIBREF15 . Another example is the capsule network architecture of BIBREF16 , which uses a convolutional layer as an initial feature extractor over a sentence.",
"[CNN acceptor] ht = ( Wh (xt-k .. xt+k) + bh )",
"h+ = maxpool(H)",
"p = (Wa h+ + ba) .",
"In this network, the INLINEFORM0 -convolutional layer ( SECREF5 ) produces a vector-valued sequence of outputs. This sequence is then collapsed to a fixed length by taking the maximum value of each filter over all the time steps ( SECREF5 ).",
"The CNN acceptor is much weaker than the LSTM. Since the vector INLINEFORM0 has finite state, we see that INLINEFORM1 . Moreover, simple regular languages like INLINEFORM2 are beyond the CNN thm:cnncounterexample. Thus, the subset relation is strict.",
"[CNN upper bound] INLINEFORM0 ",
"So, to arrive at a characterization of CNNs, we should move to subregular languages. In particular, we consider the strictly local languages BIBREF17 .",
"[CNN lower bound] Let INLINEFORM0 be the strictly local languages. Then, INLINEFORM1 ",
"Notably, strictly local formalisms have been proposed as a computational model for phonological grammar BIBREF18 . We might take this to explain why CNNs have been successful at modeling character-level information.",
"However, BIBREF18 suggest that a generalization to the tier-based strictly local languages is necessary to account for the full range of phonological phenomena. Tier-based strictly local grammars can target characters in a specific tier of the vocabulary (e.g. vowels) instead of applying to the full string. While a single convolutional layer cannot utilize tiers, it is conceivable that a more complex architecture with recurrent connections could."
],
[
"In this section, we compare our theoretical characterizations for asymptotic networks to the empirical performance of trained neural networks with continuous logits."
],
[
"The goal of this experiment is to evaluate which architectures have memory beyond finite state. We train a language model on INLINEFORM0 with INLINEFORM1 and test it on longer strings INLINEFORM2 . Predicting the INLINEFORM3 character correctly while maintaining good overall accuracy requires INLINEFORM4 states. The results reported in fig:countingresults demonstrate that all recurrent models, with only two hidden units, find a solution to this task that generalizes at least over this range of string lengths.",
" BIBREF1 report failures in attempts to train SRNs and GRUs to accept counter languages, unlike what we have found. We conjecture that this stems not from the requisite memory, but instead from the different objective function we used. Our language modeling training objective is a robust and transferable learning target BIBREF19 , whereas sparse acceptance classification might be challenging to learn directly for long strings.",
" BIBREF1 also observe that LSTMs use their memory as counters in a straightforwardly interpretable manner, whereas SRNs and GRUs do not do so in any obvious way. Despite this, our results show that SRNs and GRUs are nonetheless able to implement generalizable counter memory while processing strings of significant length. Because the strategies learned by these architectures are not asymptotically stable, however, their schemes for encoding counting are less interpretable."
],
[
"In order to abstract away from asymptotically unstable representations, our next experiment investigates how adding noise to an RNN's activations impacts its ability to count. For the SRN and GRU, noise is added to INLINEFORM0 before computing INLINEFORM1 , and for the LSTM, noise is added to INLINEFORM2 . In either case, the noise is sampled from the distribution INLINEFORM3 .",
"The results reported in the right column of fig:countingresults show that the noisy SRN and GRU now fail to count, whereas the noisy LSTM remains successful. Thus, the asymptotic characterization of each architecture matches the capacity of a trained network when a small amount of noise is introduced.",
"From a practical perspective, training neural networks with Gaussian noise is one way of improving generalization by preventing overfitting BIBREF20 , BIBREF21 . From this point of view, asymptotic characterizations might be more descriptive of the generalization capacities of regularized neural networks of the sort necessary to learn the patterns in natural language data as opposed to the unregularized networks that are typically used to learn the patterns in carefully curated formal languages."
],
[
"Another important formal language task for assessing network memory is string reversal. Reversing requires remembering a INLINEFORM0 prefix of characters, which implies INLINEFORM1 state complexity.",
"We frame reversing as a seq2seq transduction task, and compare the performance of an LSTM encoder-decoder architecture to the same architecture augmented with attention. We also report the results of BIBREF22 for a stack neural network (StackNN), another architecture with INLINEFORM0 state complexity (thm:stackstatecomplexity).",
"Following BIBREF22 , the models were trained on 800 random binary strings with length INLINEFORM0 and evaluated on strings with length INLINEFORM1 . As can be seen in table:extremereverse, the LSTM with attention achieves 100.0% validation accuracy, but fails to generalize to longer strings. In contrast, BIBREF22 report that a stack neural network can learn and generalize string reversal flawlessly. In both cases, it seems that having INLINEFORM2 state complexity enables better performance on this memory-demanding task. However, our seq2seq LSTMs appear to be biased against finding a strategy that generalizes to longer strings."
],
[
"We have introduced asymptotic acceptance as a new way to characterize neural networks as automata of different sorts. It provides a useful and generalizable tool for building intuition about how a network works, as well as for comparing the formal properties of different architectures. Further, by combining asymptotic characterizations with existing results in mathematical linguistics, we can better assess the suitability of different architectures for the representation of natural language grammar.",
"We observe empirically, however, that this discrete analysis fails to fully characterize the range of behaviors expressible by neural networks. In particular, RNNs predicted to be finite-state solve a task that requires more than finite memory. On the other hand, introducing a small amount of noise into a network's activations seems to prevent it from implementing non-asymptotic strategies. Thus, asymptotic characterizations might be a good model for the types of generalizable strategies that noise-regularized neural networks trained on natural language data can learn."
],
[
"Thank you to Dana Angluin and Robert Frank for their insightful advice and support on this project."
],
[
"[Arbitary approximation] Let INLINEFORM0 be a neural sequence acceptor for INLINEFORM1 . For all INLINEFORM2 , there exist parameters INLINEFORM3 such that, for any string INLINEFORM4 with INLINEFORM5 , INLINEFORM6 ",
"where INLINEFORM0 rounds to the nearest integer.",
"Consider a string INLINEFORM0 . By the definition of asymptotic acceptance, there exists some number INLINEFORM1 which is the smallest number such that, for all INLINEFORM2 , N(X) - 1L(X) < 12",
" N(X) = 1L(X) . Now, let INLINEFORM0 be the set of sentences INLINEFORM1 with length less than INLINEFORM2 . Since INLINEFORM3 is finite, we pick INLINEFORM4 just by taking DISPLAYFORM0 ",
"[General bound on state complexity] Let INLINEFORM0 be a neural network hidden state. For any length INLINEFORM1 , it holds that INLINEFORM2 ",
"The number of configurations of INLINEFORM0 cannot be more than the number of distinct inputs to the network. By construction, each INLINEFORM1 is a one-hot vector over the alphabet INLINEFORM2 . Thus, the state complexity is bounded according to INLINEFORM3 "
],
[
"[SRN lower bound] INLINEFORM0 ",
"We must show that any language acceptable by a finite-state machine is SRN-acceptable. We need to asymptotically compute a representation of the machine's state in INLINEFORM0 . We do this by storing all values of the following finite predicate at each time step: DISPLAYFORM0 ",
"where INLINEFORM0 is true if the machine is in state INLINEFORM1 at time INLINEFORM2 .",
"Let INLINEFORM0 be the set of accepting states for the machine, and let INLINEFORM1 be the inverse transition relation. Assuming INLINEFORM2 asymptotically computes INLINEFORM3 , we can decide to accept or reject in the final layer according to the linearly separable disjunction DISPLAYFORM0 ",
"We now show how to recurrently compute INLINEFORM0 at each time step. By rewriting INLINEFORM1 in terms of the previous INLINEFORM2 values, we get the following recurrence: DISPLAYFORM0 ",
"Since this formula is linearly separable, we can compute it in a single neural network layer from INLINEFORM0 and INLINEFORM1 .",
"Finally, we consider the base case. We need to ensure that transitions out of the initial state work out correctly at the first time step. We do this by adding a new memory unit INLINEFORM0 to INLINEFORM1 which is always rewritten to have value 1. Thus, if INLINEFORM2 , we can be sure we are in the initial time step. For each transition out of the initial state, we add INLINEFORM3 as an additional term to get DISPLAYFORM0 ",
"This equation is still linearly separable and guarantees that the initial step will be computed correctly."
],
[
"These results follow similar arguments to those in section:srns and sec:srnproofs.",
"[GRU state complexity] The GRU hidden state has state complexity INLINEFORM0 ",
"The configuration set of INLINEFORM0 is a subset of INLINEFORM1 . Thus, we have two possibilities for each value of INLINEFORM2 : either INLINEFORM3 or INLINEFORM4 . Furthermore, the configuration set of INLINEFORM5 is a subset of INLINEFORM6 . Let INLINEFORM7 be the configuration set of INLINEFORM8 . We can describe INLINEFORM9 according to S0 = { 0 }",
"St St-1 {-1, 1} .",
"This implies that, at most, there are only three possible values for each logit: INLINEFORM0 , 0, or 1. Thus, the state complexity of INLINEFORM1 is DISPLAYFORM0 ",
"[GRU lower bound] INLINEFORM0 ",
"We can simulate a finite-state machine using the INLINEFORM0 construction from thm:srnreduction. We compute values for the following predicate at each time step: DISPLAYFORM0 ",
"Since ( EQREF27 ) is linearly separable, we can store INLINEFORM0 in our hidden state INLINEFORM1 and recurrently compute its update. The base case can be handled similarly to ( EQREF25 ). A final feedforward layer accepts or rejects according to ( EQREF23 )."
],
[
"[thm:asymptoticattention restated] Let INLINEFORM0 be the subsequence of time steps that maximize INLINEFORM1 . Asymptotically, attention computes INLINEFORM2 ",
"Observe that, asymptotically, INLINEFORM0 approaches a function DISPLAYFORM0 ",
"Thus, the output of the attention mechanism reduces to the sum DISPLAYFORM0 ",
"[thm:attentionstatecomplexity restated] The full state of the attention layer has state complexity INLINEFORM0 ",
"By the general upper bound on state complexity thm:generalstatecomplexity, we know that INLINEFORM0 . We now show the lower bound.",
"We pick weights INLINEFORM0 in the encoder such that INLINEFORM1 . Thus, INLINEFORM2 for all INLINEFORM3 . Since the values at each time step are independent, we know that (Vn) = n",
"(Vn) = 2(n) .",
"[thm:summarycomplexity restated] The attention summary vector has state complexity INLINEFORM0 ",
"By thm:asymptoticattention, we know that DISPLAYFORM0 ",
"By construction, there is a finite set INLINEFORM0 containing all possible configurations of every INLINEFORM1 . We bound the number of configurations for each INLINEFORM2 by INLINEFORM3 to get DISPLAYFORM0 ",
"[Attention state complexity lower bound] The attention summary vector has state complexity INLINEFORM0 ",
"Consider the case where keys and values have dimension 1. Further, let the input strings come from a binary alphabet INLINEFORM0 . We pick parameters INLINEFORM1 in the encoder such that, for all INLINEFORM2 , DISPLAYFORM0 ",
"and INLINEFORM0 . Then, attention returns DISPLAYFORM0 ",
"where INLINEFORM0 is the number of INLINEFORM1 such that INLINEFORM2 . We can vary the input to produce INLINEFORM3 from 1 to INLINEFORM4 . Thus, we have (hn) = n",
"(hn) = (n) .",
"[Attention state complexity with unique maximum] If, for all INLINEFORM0 , there exists a unique INLINEFORM1 such that INLINEFORM2 , then INLINEFORM3 ",
"If INLINEFORM0 has a unique maximum, then by cor:injectiveattention attention returns DISPLAYFORM0 ",
"By construction, there is a finite set INLINEFORM0 which is a superset of the configuration set of INLINEFORM1 . Thus, DISPLAYFORM0 ",
"[Attention state complexity with ReLU activations] If INLINEFORM0 for INLINEFORM1 , then INLINEFORM2 ",
"By thm:asymptoticattention, we know that attention computes DISPLAYFORM0 ",
"This sum evaluates to a vector in INLINEFORM0 , which means that DISPLAYFORM0 ",
"thm:attentioninfinitevalues applies if the sequence INLINEFORM0 is computed as the output of INLINEFORM1 . A similar result holds if it is computed as the output of an unsquashed linear transformation."
],
[
"[CNN counterexample] INLINEFORM0 ",
"By contradiction. Assume we can write a network with window size INLINEFORM0 that accepts any string with exactly one INLINEFORM1 and reject any other string. Consider a string with two INLINEFORM2 s at indices INLINEFORM3 and INLINEFORM4 where INLINEFORM5 . Then, no column in the network receives both INLINEFORM6 and INLINEFORM7 as input. When we replace one INLINEFORM8 with an INLINEFORM9 , the value of INLINEFORM10 remains the same. Since the value of INLINEFORM11 ( SECREF5 ) fully determines acceptance, the network does not accept this new string. However, the string now contains exactly one INLINEFORM12 , so we reach a contradiction.",
"[Strictly INLINEFORM0 -local grammar] A strictly INLINEFORM1 -local grammar over an alphabet INLINEFORM2 is a set of allowable INLINEFORM3 -grams INLINEFORM4 . Each INLINEFORM5 takes the form INLINEFORM6 ",
"where INLINEFORM0 is a padding symbol for the start and end of sentences.",
"[Strictly local acceptance] A strictly INLINEFORM0 -local grammar INLINEFORM1 accepts a string INLINEFORM2 if, at each index INLINEFORM3 , INLINEFORM4 ",
"[Implies thm:convstrictlylocal] A INLINEFORM0 -CNN can asymptotically accept any strictly INLINEFORM1 -local language.",
"We construct a INLINEFORM0 -CNN to simulate a strictly INLINEFORM1 -local grammar. In the convolutional layer ( SECREF5 ), each filter identifies whether a particular invalid INLINEFORM2 -gram is matched. This condition is a conjunction of one-hot terms, so we use INLINEFORM3 to construct a linear transformation that comes out to 1 if a particular invalid sequence is matched, and INLINEFORM4 otherwise.",
"Next, the pooling layer ( SECREF5 ) collapses the filter values at each time step. A pooled filter will be 1 if the invalid sequence it detects was matched somewhere and INLINEFORM0 otherwise.",
"Finally, we decide acceptance ( SECREF5 ) by verifying that no invalid pattern was detected. To do this, we assign each filter a weight of INLINEFORM0 use a threshold of INLINEFORM1 where INLINEFORM2 is the number of invalid patterns. If any filter has value 1, then this sum will be negative. Otherwise, it will be INLINEFORM3 . Thus, asymptotic sigmoid will give us a correct acceptance decision."
],
[
"Refer to BIBREF22 for a definition of the StackNN architecture. The architecture utilizes a differentiable data structure called a neural stack. We show that this data structure has INLINEFORM0 state complexity.",
"[Neural stack state complexity] Let INLINEFORM0 be a neural stack with a feedforward controller. Then, INLINEFORM1 ",
"By the general state complexity bound thm:generalstatecomplexity, we know that INLINEFORM0 . We now show the lower bound.",
"The stack at time step INLINEFORM0 is a matrix INLINEFORM1 where the rows correspond to vectors that have been pushed during the previous time steps. We set the weights of the controller INLINEFORM2 such that, at each step, we pop with strength 0 and push INLINEFORM3 with strength 1. Then, we have (Sn) = n",
"(Sn) = 2(n) ."
]
],
"section_name": [
"Introduction",
"Introducing the Asymptotic Analysis",
"Recurrent Neural Networks",
"Simple Recurrent Networks",
"Long Short-Term Memory Networks",
"Gated Recurrent Units",
"RNN Complexity Hierarchy",
"Attention",
"Convolutional Networks",
"Empirical Results",
"Counting",
"Counting with Noise",
"Reversing",
"Conclusion",
"Acknowledgements",
"Asymptotic Acceptance and State Complexity",
"SRN Lemmas",
"GRU Lemmas",
"Attention Lemmas",
"CNN Lemmas",
"Neural Stack Lemmas"
]
} | {
"answers": [
{
"annotation_id": [
"10d1ae0dadf4a3f406a790dbcdb56ae89538b98f",
"b4b959b2370ec382f7a87f412b93afbed70d60c1"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"The INLINEFORM0 complexity of the LSTM architecture means that it is impossible for LSTMs to copy or reverse long strings. The exponential state complexity provided by attention enables copying, which we can view as a simplified version of machine translation. Thus, it makes sense that attention is almost universal in machine translation architectures. The additional memory introduced by attention might also allow more complex hierarchical representations.",
"[SRN characterization] Let INLINEFORM0 denote the languages acceptable by an SRN, and INLINEFORM1 the regular languages. Then, INLINEFORM2",
"So, to arrive at a characterization of CNNs, we should move to subregular languages. In particular, we consider the strictly local languages BIBREF17 ."
],
"extractive_spans": [],
"free_form_answer": "Attention neural networks can represent more languages than other networks. Simple recurring networks can describe regular languages. CNNs can describe only strictly local languages. ",
"highlighted_evidence": [
"The exponential state complexity provided by attention enables copying, which we can view as a simplified version of machine translation. Thus, it makes sense that attention is almost universal in machine translation architectures. The additional memory introduced by attention might also allow more complex hierarchical representations.",
"[SRN characterization] Let INLINEFORM0 denote the languages acceptable by an SRN, and INLINEFORM1 the regular languages. Then, INLINEFORM2",
"So, to arrive at a characterization of CNNs, we should move to subregular languages. In particular, we consider the strictly local languages BIBREF17 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"a5417bda35fc68d9b5306b64efe0a125818c76a0",
"ea215bcc8eb279cf5ec3e2365f7a8003149c44e5"
],
"answer": [
{
"evidence": [
"BIBREF1 show how the LSTM can simulate a simplified variant of the counter machine. Combining these results, we see that the asymptotic expressiveness of the LSTM falls somewhere between the general and simplified counter languages. This suggests counting is a good way to understand the behavior of LSTMs.",
"Another important formal language task for assessing network memory is string reversal. Reversing requires remembering a INLINEFORM0 prefix of characters, which implies INLINEFORM1 state complexity.",
"We frame reversing as a seq2seq transduction task, and compare the performance of an LSTM encoder-decoder architecture to the same architecture augmented with attention. We also report the results of BIBREF22 for a stack neural network (StackNN), another architecture with INLINEFORM0 state complexity (thm:stackstatecomplexity).",
"Counting",
"The goal of this experiment is to evaluate which architectures have memory beyond finite state. We train a language model on INLINEFORM0 with INLINEFORM1 and test it on longer strings INLINEFORM2 . Predicting the INLINEFORM3 character correctly while maintaining good overall accuracy requires INLINEFORM4 states. The results reported in fig:countingresults demonstrate that all recurrent models, with only two hidden units, find a solution to this task that generalizes at least over this range of string lengths.",
"Counting with Noise",
"In order to abstract away from asymptotically unstable representations, our next experiment investigates how adding noise to an RNN's activations impacts its ability to count. For the SRN and GRU, noise is added to INLINEFORM0 before computing INLINEFORM1 , and for the LSTM, noise is added to INLINEFORM2 . In either case, the noise is sampled from the distribution INLINEFORM3 .",
"Reversing"
],
"extractive_spans": [
"Counting",
"Counting with Noise",
"Reversing"
],
"free_form_answer": "",
"highlighted_evidence": [
"Combining these results, we see that the asymptotic expressiveness of the LSTM falls somewhere between the general and simplified counter languages. This suggests counting is a good way to understand the behavior of LSTMs.",
"Another important formal language task for assessing network memory is string reversal. Reversing requires remembering a INLINEFORM0 prefix of characters, which implies INLINEFORM1 state complexity.\n\nWe frame reversing as a seq2seq transduction task, and compare the performance of an LSTM encoder-decoder architecture to the same architecture augmented with attention",
"Counting\nThe goal of this experiment is to evaluate which architectures have memory beyond finite state. We train a language model on INLINEFORM0 with INLINEFORM1 and test it on longer strings INLINEFORM2 . Predicting the INLINEFORM3 character correctly while maintaining good overall accuracy requires INLINEFORM4 states.",
"Counting with Noise\nIn order to abstract away from asymptotically unstable representations, our next experiment investigates how adding noise to an RNN's activations impacts its ability to count.",
"Reversing\nAnother important formal language task for assessing network memory is string reversal."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The goal of this experiment is to evaluate which architectures have memory beyond finite state. We train a language model on INLINEFORM0 with INLINEFORM1 and test it on longer strings INLINEFORM2 . Predicting the INLINEFORM3 character correctly while maintaining good overall accuracy requires INLINEFORM4 states. The results reported in fig:countingresults demonstrate that all recurrent models, with only two hidden units, find a solution to this task that generalizes at least over this range of string lengths.",
"BIBREF1 report failures in attempts to train SRNs and GRUs to accept counter languages, unlike what we have found. We conjecture that this stems not from the requisite memory, but instead from the different objective function we used. Our language modeling training objective is a robust and transferable learning target BIBREF19 , whereas sparse acceptance classification might be challenging to learn directly for long strings."
],
"extractive_spans": [
"counter languages"
],
"free_form_answer": "",
"highlighted_evidence": [
"The results reported in fig:countingresults demonstrate that all recurrent models, with only two hidden units, find a solution to this task that generalizes at least over this range of string lengths.\n\nBIBREF1 report failures in attempts to train SRNs and GRUs to accept counter languages, unlike what we have found."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"How do attention, recurrent and convolutional networks differ on the language classes they accept?",
"What type of languages do they test LSTMs on?"
],
"question_id": [
"b0e894536857cb249bd75188c3ca5a04e49ff0b6",
"94c22f72665dfac3e6e72e40f2ffbc8c99bf849c"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
""
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: With sigmoid activations, the network on the left accepts a sequence of bits if and only if xt = 1 for some t. On the right is the discrete computation graph that the network approaches asymptotically.",
"Table 1: Generalization performance of language models trained on anbnc. Each model has 2 hidden units.",
"Table 2: Max validation and generalization accuracies on string reversal over 10 trials. The top section shows our seq2seq LSTM with and without attention. The bottom reports the LSTM and StackNN results of Hao et al. (2018). Each LSTM has 10 hidden units."
],
"file": [
"2-Figure1-1.png",
"8-Table1-1.png",
"8-Table2-1.png"
]
} | [
"How do attention, recurrent and convolutional networks differ on the language classes they accept?"
] | [
[
"1906.01615-Attention-13",
"1906.01615-Convolutional Networks-7"
]
] | [
"Attention neural networks can represent more languages than other networks. Simple recurring networks can describe regular languages. CNNs can describe only strictly local languages. "
] | 317 |
1910.10487 | Memory-Augmented Recurrent Networks for Dialogue Coherence | Recent dialogue approaches operate by reading each word in a conversation history, and aggregating accrued dialogue information into a single state. This fixed-size vector is not expandable and must maintain a consistent format over time. Other recent approaches exploit an attention mechanism to extract useful information from past conversational utterances, but this introduces an increased computational complexity. In this work, we explore the use of the Neural Turing Machine (NTM) to provide a more permanent and flexible storage mechanism for maintaining dialogue coherence. Specifically, we introduce two separate dialogue architectures based on this NTM design. The first design features a sequence-to-sequence architecture with two separate NTM modules, one for each participant in the conversation. The second memory architecture incorporates a single NTM module, which stores parallel context information for both speakers. This second design also replaces the sequence-to-sequence architecture with a neural language model, to allow for longer context of the NTM and greater understanding of the dialogue history. We report perplexity performance for both models, and compare them to existing baselines. | {
"paragraphs": [
[
"Recently, chit-chat dialogue models have achieved improved performance in modelling a variety of conversational domains, including movie subtitles, Twitter chats and help forums BIBREF0, BIBREF1, BIBREF2, BIBREF3. These neural systems were used to model conversational dialogue via training on large chit-chat datasets such as the OpenSubtitles corpus, which contains generic dialogue conversations from movies BIBREF4. The datasets used do not have an explicit dialogue state to be modelled BIBREF5, but rather require the agent to learn the nuances of natural language in the context of casual peer-to-peer interaction.",
"Many recent chit-chat systems BIBREF2, BIBREF3 attempt to introduce increased diversity into model responses. However, dialogue systems have also been known to suffer from a lack of coherence BIBREF0. Given an input message history, systems often have difficulty tracking important information such as professions and names BIBREF0. It would be of benefit to create a system which extracts relevant features from the input that indicate which responses would be most appropriate, and conditions on this stored information to select the appropriate response.",
"A major problem with existing recurrent neural network (RNN) architectures is that these systems aggregate all input tokens into a state vector, which is passed to a decoder for generation of the final response, or in the case of a neural probabilistic language model BIBREF6, the state at each time step is used to predict the next token in the sequence. Ideally the size of the state should expand with the number of input tokens and should not lose important information about the input. However, RNN states are typically fixed sized, and for any chosen state size, there exists an input sequence length for which the RNN would not be able to store all relevant details for a final response. In addition, the RNN state undergoes constant transformation at each computational step. This makes it difficult to maintain a persistent storage of information that remains constant over many time steps.",
"The introduction of attention mechanisms BIBREF7 has sparked a change in the current design of RNN architectures. Instead of relying fully on a fixed-sized state vector, an attention mechanism allows each decoder word prediction step to extract relevant information from past states through a key-value query mechanism. However, this mechanism connects every input token with all preceeding ones via a computational step, increasing the complexity of the calculation to $O(N^2)$ for an input sequence size N. In the ideal case, the mapping of input conversation history to output response would have a computational complexity of $O(N)$. For this reason, it is desirable to have an information retrieval system that is both scale-able, but not proportional to input length.",
"We study the impact of accessible memory on response coherence by constructing a memory-augmented dialogue system. The motivation is that it would be beneficial to store details of the conversational history in a more permanent memory structure, instead of being captured inside a fixed-sized RNN hidden state. Our proposed system is able to both read and write to a persistent memory module after reading each input utterance. As such, it has access to a stable representation of the input message history when formulating a final response. We explore two distinct memory architectures with different properties, and compare their differences and benefits. We evaluate our proposed memory systems using perplexity evaluation, and compare them to competitive baselines."
],
[
"vinyals2015neural train a sequence-to-sequence LSTM-based dialogue model on messages from an IT help-desk chat service, as well as the OpenSubtitles corpus, which contains subtitles from popular movies. This model was able to answer philosophical questions and performed well with common sense reasoning. Similarly, serban2016building train a hierarchical LSTM architecture (HRED) on the MovieTriples dataset, which contains examples of the form (utterance #1, utterance #2, utterance #3). However, this dataset is small and does not have conversations of larger length. They show that using a context recurrent neural network (RNN) to read representations at the utterance-level allows for a more top-down perspective on the dialogue history. Finally, serban2017hierarchical build a dialogue system which injects diversity into output responses (VHRED) through the use of a latent variable for variational inference BIBREF3. They argue that the injection of information from the latent variables during inference increases response coherence without degrading response quality. They train the full system on the Twitter Dialogue corpus, which contains generic multi-turn conversations from public Twitter accounts. They also train on the Ubuntu Dialogue Corpus, a collection of multi-turn vocabulary-rich conversations extracted from Ubuntu chat logs. du2018variational adapt from the VHRED architecture by increasing the influence of the latent variables on the output utterance. In this work, a backwards RNN carries information from future timesteps to present ones, such that a backward state contains a summary of all future utterances the model is required to generate. The authors constrain this backward state at each time step to be a latent variable, and minimize the KL loss to restrict information flow. At inference, all backward state latent variables are sampled from and decoded to the output response. The authors interpret the sampling of the latent variables as a \"plan\" of what to generate next.",
"bowman2015generating observe that latent variables can sometimes degrade, where the system chooses not to store information in the variable and does not condition on it when producing the output. bowman2015generating introduce a process called KL-annealing which slowly increases the KL divergence loss component over the course of training. However, BIBREF8 claim that KL annealing is not enough, and introduce utterance dropout to force the model to rely on information stored in the latent variable during response generation. They apply this system to conversational modelling.",
"Other attempts to increase diversity focus on selecting diverse responses after the model is trained. li2015diversity introduce a modification of beam search. Beam search attempts to find the highest probability response to a given input by producing a tree of possible responses and \"pruning\" branches that have the lowest probability. The top K highest probability responses are returned, of which the highest is selected as the output response. li2015diversity observe that beam search tends to select certain families of responses that temporarily have higher probability. To combat this, a discount factor of probabilities is added to responses that come from the same parent response candidate. This encourages selecting responses that are different from one another when searching for the highest probability target.",
"While coherence and diversity remain the primary focus of model dialogue architectures, many have tried to incorporate additional capabilities. zhou2017mojitalk introduce emotion into generated utterances by creating a large-scale fine-grained emotion dialogue dataset that uses tagged emojis to classify utterance sentiment. Then they train a conditional variational autoencoder (CVAE) to generate responses given an input emotion. Along this line of research, li2016persona use Reddit users as a source of persona, and learn individual persona embeddings per user. The system then conditions on these embeddings to generate a response while maintaining coherence specific to the given user. pandey2018exemplar expand the context of an existing dialogue model by extracting input responses from the training set that are most similar to the current input. These \"exemplar\" responses are then conditioned on to use as reference for final response generation. In another attempt to add context, young2018augmenting utilize a relational database to extract specific entity relations that are relevant for the current input. These relations provide more context for the dialogue model and allows it to respond to the user with information it did not observe in the training set.",
"Ideally, NLP models should have the ability to use and update information processed in the past. For dialogue generation, this ability is particularly important, because dialogue involves exchange of information in discourse, and all responses depend on what has been mentioned in the past. RNNs introduce \"memory\" by adding an output of one time step to their input in a future time step. Theoretically, properly trained RNNs are Turing-complete, but in reality vanilla RNNs often do not perform well due to the gradient vanishing problem. Gated RNNs such as LSTM and GRU introduces cell state, which can be understood as memory controlled by trainable logic gates. Gated RNNs do not suffer from the vanishing gradient problem as much, and indeed outperform vanilla RNNs in various NLP tasks. This is likely because the vanilla RNN state vector undergoes a linear transformation at each step, which can be difficult to control. In contrast, gated RNNs typically both control the flow of information, and ensure only elemnt-wise operations occur on the state, which allow gradients to pass more easily. However, they too fail in some basic memorization tasks such as copying and associative recall. A major issue is when the cell state gets updated, previous memories are forever erased. As a result, Gated RNNs can not model long-term dependencies well.",
"In recent years, there have been proposals to use memory neural networks to capture long-term information. A memory module is defined as an external component of the neural network system, and it is theoretically unlimited in capacity. weston2014memory propose a sequence prediction method using a memory with content-based addressing. In their implementation for the bAbI task BIBREF9 for example, their model encodes and sequentially saves words from text in memory slots. When a question about the text is asked, the model uses content-based addressing to retrieve memories relevant to the question, in order to generate answers. They use the k-best memory slots, where k is a relative small number (1 or 2 in their paper). sukhbaatar2015end propose an end-to-end neural network model, which uses content-based addressing to access multiple memory layers. This model has been implemented in a relatively simple goal-oriented dialogue system (restaurant booking) and has decent performance BIBREF10.",
"DBLP:journals/corr/GravesWD14 further develop the addressing mechanism and make old memory slots dynamically update-able. The model read heads access information from all the memory slots at once using soft addressing. The write heads, on the other hand, have the ability to modify memory slots. The content-based addressing serves to locate relevant information from memory, while another location-based addressing is also used, to achieve slot shifting, interpolation of address from the previous step, and so on. As a result, the memory management is much more complex than the previously proposed memory neural networks. This system is known as the Neural Turing Machine (NTM).",
"Other NTM variants have also been proposed recently. DBLP:journals/corr/ZhangYZ15 propose structured memory architectures for NTMs, and argue they could alleviate overfitting and increase predictive accuracy. DBLP:journals/nature/GravesWRHDGCGRA16 propose a memory access mechanism on top of NTM, which they call the Differentiable Neural Computer (DNC). DNC can store the transitions between memory locations it accesses, and thus can model some structured data. DBLP:journals/corr/GulcehreCCB16 proposed a Dynamic Neural Turing Machine (D-NTM) model, which allows more addressing mechanisms, such as multi-step addressing. DBLP:journals/corr/GulcehreCB17 further simplified the algorithm, so a single trainable matrix is used to get locations for read and write. Both models separate the address section from the content section of memory.",
"The Global Context Layer BIBREF11 independently proposes the idea of address-content separation, noting that the content-based addressing in the canonical NTM model is difficult to train. A crucial difference between GCL and these models is that they use input “content” to compute keys. In GCL, the addressing mechanism fully depends on the entity representations, which are provided by the context encoding layers and not computed by the GCL controller. Addressing then involves matching the input entities and the entities in memory. Such an approach is desirable for tasks like event temporal relation classification, entity co-reference and so on. GCL also simplified the location-based addressing proposed in NTM. For example, there is no interpolation between current addressing and previous addressing.",
"Other than NTM-based approaches, there are recent models that use an attention mechanism over either input or external memory. For instance, the Pointer Networks BIBREF12 uses attention over input timesteps. However, it has no power to rewrite information for later use, since they have no “memory” except for the RNN states. The Dynamic Memory Networks BIBREF13 have an “episodic memory” module which can be updated at each timestep. However, the memory is a vector (“episode”) without internal structure, and the attention mechanism only works on inputs, just as in Pointer Networks. The GCL model and other NTM-based models have a memory with multiple slots, and the addressing function dictates writing and reading to/from certain slots in the memory"
],
[
"As a preliminary approach, we implement a dialogue generation system with segment-level memory manipulation. Segment-level memory refers to memory of sub-sentence level, which often corresponds to entity mentions, event mentions, and proper names, etc. We use NTM as the memory module, because it is more or less a default choice before specialized mechanisms are developed. Details of NTMs can be found in DBLP:journals/corr/GravesWD14.",
"As in the baseline model, the encoder and decoder each has an Gated Recurrent Unit (GRU) inside. A GRU is a type of recurrent neural networks that coordinates forgetting and write of information, to make sure they don't both occur simultaneously. This is accomplished via an \"update gate.\" A GRU architecture processes a list of inputs in sequence, and is described by the following equations:",
"For each input $x_t$ and previous state $h_{t-1}$, the GRU produces the next state $h_t$ given learned weights $W_z$, $W_r$ and $W$. $z_t$ denotes the update gate. The encoder GRU in this memory architecture reads a token at each time step, and encodes a context representation $c$ at the end of the input sequence. In addition to that, the memory enhanced model implements two Neural Turing Machines (NTMs). Each of them is for one speaker in the conversation, since the Ubuntu dataset has two speakers in every conversation. Every turn in a dialogue is divided in 4 “segments\". If a turn has 20 tokens, for example, a segment contains 5 tokens. The output of the GRU is written to the NTM at the end of every segment. It does not output anything useful here, but the internal memory is being updated each time. When the dialogue switches to next turn, the current NTM pauses and the other NTM starts to work in the same way. When an NTM pauses, its internal memory retains, so as soon as the dialogue moves to its turn again, it continues to read and update its internal memory.",
"Equation DISPLAY_FORM6 shows how one NTM updates. $T$ denotes the length of one turn, and $s$ is the output of the encoder GRU. $n=1,2,3,4$ represents the four time steps when the NTM updates in one turn of the conversation.",
"The two NTMs can be interpreted as two external memories tracking each speaker's utterances. When one speaker needs to make a response at the end of the conversation, he needs to refer to both speakers' history to make sure the response is coherent with respect to context. This allows for separate tracking of each participant, while also consolidating their representations.",
"The decoder GRU works the same way as the baseline model. Each time it reads a token, from either the true response or the generated response, depending on whether teacher force training is used. This token and the context representation $c$ generated by the encoder GRU are both used as input to the decoder GRU.",
"However, now the two NTMs also participate in token generation. At every time step, output of the decoder GRU is fed into the two NTMs, and outputs of the two NTMs are used together to make predictions.",
"In the equation above, $\\mathrm {FC}$ represents a fully connected layer, and $\\widehat{y_t}$ is the predicted vector.",
"From now on, we refer to this system as the D-NTMS (Dual-NTM Seq2Seq) system."
],
[
"In this section we introduce a somewhat simpler, but more effective memory module architecture. In contrast to the previous D-NTMS architecture, we combine the encoder-decoder architecture of the sequence to sequence GRU into a single language model. This combination entails the model predicting all tokens in the dialogue history in sequence. This change in setup exploits the property that the response is in essence drawn from the same distribution as all previous utterances, and so should not be treated any differently. This language model variant learns to predict all utterances in the dialogue history, and thus treats the response as just another utterance to predict. This setup may also help the model learn the flow of conversation from beginning to end.",
"With a neural language model predicting tokens, it is then necessary to insert reads and writes from a Neural Turing Machine. In this architecture, we only use one NTM. This change is motivated by the possibility that the speaker NTMs from the previous architecture may have difficulty exchanging information, and thus cannot adequately represent each utterance in the context of the previous one. We follow an identical setup as before and split the dialogue history into segments. A GRU processes each segment in sequence. Between each segment, the output GRU state is used to query and write to the NTM module to store and retrieve relevant information about the context history so far. This information is conditioned on for all subsequent tokens in the next segment, in order to exploit this information to make more informed predictions. Lastly, the GRU NTM has an internal LSTM controller which guides the read and writes to and from the memory section. Reads are facilitated via content-based addressing, where a cosine similarity mechanism selects entries that most resemble the query. The Neural Turing Machine utilized can be found as an existing Github implementation.",
"In further investigations, we refer to this model as the NTM-LM system."
],
[
"As a reliable baseline, we will evaluate a vanilla sequence-to-sequence GRU dialogue architecture, with the same hyper-parameters as our chosen model. We refer this this baseline as Seq2Seq. In addition, we report results for a vanilla GRU language model (LM). Finally, we include a more recent baseline, the Hierarchical Encoder-Decoder (HRED) system which is trained for the same number of epochs, same batch size, and with the same encoder and decoder size as the Seq2Seq baseline . As previously mentioned, we refer to our first proposed memory architecture as D-NTMS and to our second memory architecture as NTM-LM."
],
[
"To evaluate the performance of each dialogue baseline against the proposed models, we use the Ubuntu Dialogue Corpus BIBREF14, chosen for its rich vocabulary size, diversity of responses, and dependence of each utterance on previous ones (coherence required). We perform perplexity evaluation using a held-out validation set. The results are reported in Table TABREF3. Perplexity is reported per word. For reference, a randomly-initialized model would receive a perplexity of 50,000 for our chosen vocabulary size. We also report generated examples from the model, shown in Table TABREF15."
],
[
"See Table TABREF3 for details on model and baseline perplexity. To begin, it is worth noting that all of the above architectures were trained in a similar environment, with the exception of HRED, which was trained using an existing Github implementation implementation. Overall, the NTM-LM architecture performed the best of all model architectures, whereas the sequence-to-sequence architecture performed the worst. The proposed NTM-LM outperformed the DNTM-S architecture.",
"After one epoch of training, the perplexity evaluated on the validation set was 68.50 for the proposed memory-augmented NTM-LM architecture. This is a 0.68 perplexity improvement over the vanilla language model without the NTM augmentation."
],
[
"Overall, the HRED baseline was top performing among all tested architectures. This baseline breaks up utterances in a conversation and reads them separately, producing a hierarchical view which likely promotes coherence at a high level.",
"Now we will discuss the memory-augmented D-NTMS architecture. The memory-augmented architecture improved performance above the baseline sequence-to-sequence architecture. As such, it is likely that the memory modules were able to store valuable information about the conversation, and were able to draw on that information during the decoder phase. One drawback of the memory enhanced model is that training was significantly slower. For this reason, model simplification is required in the future to make it more practical. In addition, the NTM has a lot of parameters and some of them may be redundant or damaging. In the DNTM-S system, we may not need to access the NTM at each step of decoding either. Instead, it can be accessed in some intervals of time steps, and the output is used for all steps within the interval.",
"The best performing model was the NTM-LM architecture. While the model received the best performance in perplexity, it demonstrated only a one-point improvement over the existing language model architecture. While in state-of-the-art comparisons a one point difference can be significant, it does indicate that the proposed NTM addition to the language model only contributed a small improvement. It is possible that the additional NTM module was too difficult to train, or that the NTM module injected noise into the input of the GRU such that training became difficult. It is still surprising that the NTM was not put to better use, for performance gains. It is possible the model has not been appropriately tuned.",
"Another consideration of the NTM-LM architecture is that it takes a significant amount of time to train. Similar to the D-NTMS, the NTM memory module requires a sizeable amount of computational steps to both retrieve a query response from available memory slots, and also to write to a new or existing slot using existing write weights. This must be repeated for each segment. Another source of slowdown with regard to computation is the fact that the intermittent NTM reads and writes force the input utterance into segments, as illustrated in Figure FIGREF2. This splitting of token processing steps requires additional overhead to maintain, and it may discourage parallel computation of different GRU input segments simultaneously. This problem is not theoretical, and may be solved using future optimizations of a chosen deep learning framework. For Pytorch, we observed a slowdown for a segmented dialogue history versus a complete history.",
"Of all models, the HRED architecture utilized pre-trained GloVe vectors as an initialization for its input word embedding matrix. This feature likely improved performance of the HRED in comparison to other systems, such as the vanilla sequence-to-sequence. However, in separate experiments, GloVe vectors only managed a 5% coverage of all words in the vocabulary. This low number is likely due to the fact that the Ubuntu Dialogues corpus contains heavy terminology from the Ubuntu operating system and user packages. In addition, the Ubuntu conversations contain a significant amount of typos and grammar errors, further complicating analysis. Context-dependent embeddings such as ElMo BIBREF15 may help alleviate this issue, as character-level RNNs can better deal with typos and detect sub word-level elements such morphemes.",
"Due to time requirements, there were no targeted evaluations of memory coherence other than perplexity, which evaluates overall coherence of the conversation. This form of specific evaluation may be achievable through a synethetic dataset of responses, for example, \"What is your profession? I am a doctor.</s>What do you do for work?</s>I am a doctor.\" This sort of example would require direct storage of the profession of a given speaker. However, the Ubuntu Dialogue corpus contains complicated utterances in a specific domain, and thus does not lend well to synthesized utterances from a simpler conversational domain. In addition, synthetic conversations like the one above do not sound overly natural, as a human speaker does not normally repeat a query for information after they have already asked for it. In that sense, it is difficult to directly evaluate dialogue coherence.",
"Not reported in this paper was a separate implementation of the language model that achieved better results (62 perplexity). While this was the best performing model, it was written in a different environment than the language model reported here or the NTM-LM model. As such, comparing the NTM-LM to this value would be misleading. Since the NTM-LM is an augmentation of the existing LM language model implementation, we report perplexity results from that implementation instead for fair comparison. In that implementation, the addition of the NTM memory model improved performance. For completeness, we report the existence of the outperforming language model here."
],
[
"We establish memory modules as a valid means of storing relevant information for dialogue coherence, and show improved performance when compared to the sequence-to-sequence baseline and vanilla language model. We establish that augmenting these baseline architectures with NTM memory modules can provide a moderate bump in performance, at the cost of slower training speeds. The memory-augmented architectures described above should be modified for increased computational speed and a reduced number of parameters, in order to make each memory architecture more feasible to incorporate into future dialogue designs.",
"In future work, the memory module could be applied to other domains such as summary generation. While memory modules are able to capture neural vectors of information, they may not easily capture specific words for later use. A possible future approach might combine memory module architectures with pointer softmax networks BIBREF16 to allow memory models to store information about which words from previous utterances of the conversation to use in future responses."
],
[
"We construct a vocabulary of size 50,000 (pruning less frequent tokens) from the chosen Ubuntu Dialogues Corpus, and represent all missing tokens using a special unknown symbol <unk>. When processing conversations for input into a sequence-to-sequence based model, we split each conversation history into history and response, where response is the final utterance. To clarify, all utterances in each conversation history are separated by a special <",
"s>symbol. A maximum of 170 tokens are allocated for the input history and 30 tokens are allocated for the maximum output response.",
"When inputting conversation dialogues into a language model-based implementation, the entire conversation history is kept intact, and is formatted for a maximum conversation length of 200 tokens. As for all maximum lengths specified here, an utterance which exceeds the maximum length is pruned, and extra tokens are not included in the perplexity calculation. This is likely not an issue, as perplexity calculations are per-word and include the end of sequence token."
],
[
"All models were trained using the Adam optimizer BIBREF23 and updated using a learning rate of 0.0001. All models used a batch size of 32, acceptable for the computational resources available. We develop all models within the deep learning framework Pytorch. To keep computation feasible, we train all models for one epoch. For reference, the NTM-LM architecture took over two and a half days of training for one epoch with the parameters specified."
],
[
"In our preliminary experiment, each of the NTMs in the D-NTMS architecture were chosen to have 1 read head and 1 write head. The number of memory slots is 20. The capacity of each slot is 512, the same as the decoder GRU state dimensionality. Each has an LSTM controller, and the size is chosen to be 512 as well. These parameters are consistent for the NTM-LM architecture as well.",
"All sequence-to-sequence models utilized a GRU encoder size of 200, with a decoder GRU size of 400. All language models used a decoder of size 400. The encoder hidden size of the HRED model was set to 400 hidden units. The input embedding size to all models is 200, with only the HRED architecture randomly initializing these embeddings with pre-trained GloVe vectors. The sequence-to-sequence architecture learns separate input word embeddings for encoder and decoder.",
"Each Neural Turing Machine uses 8 heads for reading and writing, with each head having a size of 64 hidden units In the case of the NTM-LM architecture, 32 memory slots are available for storage by the model. When breaking GPU computation to read and write from the NTM, we break the input conversation into segments of size 20 with NTM communication in-between segments. In contrast, the D-NTMS architecture uses a segment size of 5, and breaks up the conversation in utterances which fit in each segment."
]
],
"section_name": [
"Introduction",
"Recent Work",
"Dual-NTM Seq2Seq Dialogue Architecture",
"NTM Language Model Dialogue Architecture",
"Baselines",
"Evaluation",
"Results",
"Discussion",
"Conclusion",
"Appendix ::: Preprocessing",
"Appendix ::: Training/Parameters",
"Appendix ::: Layer Dimensions"
]
} | {
"answers": [
{
"annotation_id": [
"2712cc9912c64a3e1d8a52deb4095df62ff2e998",
"b13df11a696961dc80333e851907fbab0c24ba1d"
],
"answer": [
{
"evidence": [
"In future work, the memory module could be applied to other domains such as summary generation. While memory modules are able to capture neural vectors of information, they may not easily capture specific words for later use. A possible future approach might combine memory module architectures with pointer softmax networks BIBREF16 to allow memory models to store information about which words from previous utterances of the conversation to use in future responses."
],
"extractive_spans": [
"memory module could be applied to other domains such as summary generation",
"future approach might combine memory module architectures with pointer softmax networks"
],
"free_form_answer": "",
"highlighted_evidence": [
"In future work, the memory module could be applied to other domains such as summary generation. While memory modules are able to capture neural vectors of information, they may not easily capture specific words for later use. A possible future approach might combine memory module architectures with pointer softmax networks BIBREF16 to allow memory models to store information about which words from previous utterances of the conversation to use in future responses."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Now we will discuss the memory-augmented D-NTMS architecture. The memory-augmented architecture improved performance above the baseline sequence-to-sequence architecture. As such, it is likely that the memory modules were able to store valuable information about the conversation, and were able to draw on that information during the decoder phase. One drawback of the memory enhanced model is that training was significantly slower. For this reason, model simplification is required in the future to make it more practical. In addition, the NTM has a lot of parameters and some of them may be redundant or damaging. In the DNTM-S system, we may not need to access the NTM at each step of decoding either. Instead, it can be accessed in some intervals of time steps, and the output is used for all steps within the interval.",
"Of all models, the HRED architecture utilized pre-trained GloVe vectors as an initialization for its input word embedding matrix. This feature likely improved performance of the HRED in comparison to other systems, such as the vanilla sequence-to-sequence. However, in separate experiments, GloVe vectors only managed a 5% coverage of all words in the vocabulary. This low number is likely due to the fact that the Ubuntu Dialogues corpus contains heavy terminology from the Ubuntu operating system and user packages. In addition, the Ubuntu conversations contain a significant amount of typos and grammar errors, further complicating analysis. Context-dependent embeddings such as ElMo BIBREF15 may help alleviate this issue, as character-level RNNs can better deal with typos and detect sub word-level elements such morphemes.",
"We establish memory modules as a valid means of storing relevant information for dialogue coherence, and show improved performance when compared to the sequence-to-sequence baseline and vanilla language model. We establish that augmenting these baseline architectures with NTM memory modules can provide a moderate bump in performance, at the cost of slower training speeds. The memory-augmented architectures described above should be modified for increased computational speed and a reduced number of parameters, in order to make each memory architecture more feasible to incorporate into future dialogue designs."
],
"extractive_spans": [],
"free_form_answer": "Strategies to reduce number of parameters, space out calls over larger time intervals and use context dependent embeddings.",
"highlighted_evidence": [
"One drawback of the memory enhanced model is that training was significantly slower. For this reason, model simplification is required in the future to make it more practical. In addition, the NTM has a lot of parameters and some of them may be redundant or damaging. In the DNTM-S system, we may not need to access the NTM at each step of decoding either. Instead, it can be accessed in some intervals of time steps, and the output is used for all steps within the interval.",
" Context-dependent embeddings such as ElMo BIBREF15 may help alleviate this issue, as character-level RNNs can better deal with typos and detect sub word-level elements such morphemes.",
"The memory-augmented architectures described above should be modified for increased computational speed and a reduced number of parameters, in order to make each memory architecture more feasible to incorporate into future dialogue designs."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
},
{
"annotation_id": [
"10ead9a6719b8fc6d52d055c0b9f52ace76c9897",
"3786f997d4de451430b91947fe66d62f95358f0c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Word-level perplexity evaluation on proposed model and two selected baselines."
],
"extractive_spans": [],
"free_form_answer": "9.2% reduction in perplexity",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Word-level perplexity evaluation on proposed model and two selected baselines."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"After one epoch of training, the perplexity evaluated on the validation set was 68.50 for the proposed memory-augmented NTM-LM architecture. This is a 0.68 perplexity improvement over the vanilla language model without the NTM augmentation."
],
"extractive_spans": [
"This is a 0.68 perplexity improvement over the vanilla language model without the NTM augmentation."
],
"free_form_answer": "",
"highlighted_evidence": [
"After one epoch of training, the perplexity evaluated on the validation set was 68.50 for the proposed memory-augmented NTM-LM architecture. This is a 0.68 perplexity improvement over the vanilla language model without the NTM augmentation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a06fc00e2538c8e8759bf4cb1316f4f4ee005481",
"fe17c3e94491a38bd3e3e84ae0fc4b13975e2ccb"
],
"answer": [
{
"evidence": [
"The best performing model was the NTM-LM architecture. While the model received the best performance in perplexity, it demonstrated only a one-point improvement over the existing language model architecture. While in state-of-the-art comparisons a one point difference can be significant, it does indicate that the proposed NTM addition to the language model only contributed a small improvement. It is possible that the additional NTM module was too difficult to train, or that the NTM module injected noise into the input of the GRU such that training became difficult. It is still surprising that the NTM was not put to better use, for performance gains. It is possible the model has not been appropriately tuned."
],
"extractive_spans": [
"NTM-LM"
],
"free_form_answer": "",
"highlighted_evidence": [
"The best performing model was the NTM-LM architecture."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"See Table TABREF3 for details on model and baseline perplexity. To begin, it is worth noting that all of the above architectures were trained in a similar environment, with the exception of HRED, which was trained using an existing Github implementation implementation. Overall, the NTM-LM architecture performed the best of all model architectures, whereas the sequence-to-sequence architecture performed the worst. The proposed NTM-LM outperformed the DNTM-S architecture."
],
"extractive_spans": [
" NTM-LM"
],
"free_form_answer": "",
"highlighted_evidence": [
"The proposed NTM-LM outperformed the DNTM-S architecture."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat"
],
"question": [
"What is possible future improvement for proposed method/s?",
"What is percentage change in performance for better model when compared to baseline?",
"Which of two design architectures have better performance?"
],
"question_id": [
"ce8d8de78a21a3ba280b658ac898f73d0b52bf1b",
"e069fa1eecd711a573c0d5c83a3493f5f04b1d8a",
"8db11d9166474a0e98b99ac7f81d1f14539d79ec"
],
"question_writer": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Memory-augmented dialogue architecture with dual NTMs (D-NTMS). GRU encoders read each utterance in the conversation in segments. After reading each segment, a write is made to the corresponding Neural Turing Machine memory module (NTM). Two NTMs are designated, one for each speaker in the conversation (two speakers total). The resulting NTMs are read from and their predictions are used to output the final response prediction.",
"Figure 2: Proposed single-NTM language model dialogue system (NTM-LM). The input dialogue history is broken into segments and each is processed by a GRU language model in sequence. At the end of each segment, the GRU state is used to read from and write to the persistent Neural Turing Machine (NTM).",
"Table 1: Word-level perplexity evaluation on proposed model and two selected baselines.",
"Figure 3: Example loss values for both training and validation datasets, over the course of model training. Displayed for NTM-LM model specifically, but similar loss curves were observed for all models.",
"Table 2: Samples decoded with random sampling from the best-performing NTM-LM architecture. First column shows the message history, with the second column showing a model response. Due to the nature of the Ubuntu Dialogue Corpus, the terminology is complex."
],
"file": [
"2-Figure1-1.png",
"5-Figure2-1.png",
"5-Table1-1.png",
"6-Figure3-1.png",
"12-Table2-1.png"
]
} | [
"What is possible future improvement for proposed method/s?",
"What is percentage change in performance for better model when compared to baseline?"
] | [
[
"1910.10487-Discussion-4",
"1910.10487-Conclusion-1",
"1910.10487-Discussion-1",
"1910.10487-Conclusion-0"
],
[
"1910.10487-5-Table1-1.png",
"1910.10487-Results-1"
]
] | [
"Strategies to reduce number of parameters, space out calls over larger time intervals and use context dependent embeddings.",
"9.2% reduction in perplexity"
] | 318 |
2002.09616 | "Wait, I'm Still Talking!"Predicting the Dialogue Interaction Behavior Using Imagine-Then-Arbitrate Model | Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a one-to-one environment, making one response to one utterance each round. However, in real human-human conversations, human often sequentially sends several short messages for readability instead of a long message in one turn. Thus messages will not end with an explicit ending signal, which is crucial for agents to decide when to reply. So the first step for an intelligent dialogue agent is not replying but deciding if it should reply at the moment. To address this issue, in this paper, we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. Our method has two imaginator modules and an arbitrator module. The two imaginators will learn the agent's and user's speaking style respectively, generate possible utterances as the input of the arbitrator, combining with dialogue history. And the arbitrator decides whether to wait or to make a response to the user directly. To verify the performance and effectiveness of our method, we prepared two dialogue datasets and compared our approach with several popular models. Experimental results show that our model performs well on addressing ending prediction issue and outperforms baseline models. | {
"paragraphs": [
[
"All species are unique, but languages make humans uniquest BIBREF0. Dialogues, especially spoken and written dialogues, are fundamental communication mechanisms for human beings. In real life, tons of businesses and entertainments are done via dialogues. This makes it significant and valuable to build an intelligent dialogue product. So far there are quite a few business applications of dialogue techniques, e.g. personal assistant, intelligent customer service and chitchat companion.",
"The quality of response is always the most important metric for dialogue agent, targeted by most existing work and models searching the best response. Some works incorporate knowledge BIBREF1, BIBREF2 to improve the success rate of task-oriented dialogue models, while some others BIBREF3 solve the rare words problem and make response more fluent and informative.",
"Despite the heated competition of models, however, the pace of interaction is also important for human-computer dialogue agent, which has drawn less or no attention. Figure FIGREF1 shows a typical dialogue fragment in an instant message program. A user is asking the service about the schedule of the theater. The user firstly says hello (U11) followed by demand description (U12), and then asks for suggested arrangement (U13), each of which is sent as a single message in one turn. The agent doesn't answer (A2) until the user finishes his description and throws his question. The user then makes a decision (U21) and asks a new question (U22). And then the agent replies with (A3). It's quite normal and natural that the user sends several messages in one turn and the agent waits until the user finished his last message, otherwise the pace of the conversation will be messed up. However, existing dialogue agents can not handle well when faced with this scenario and will reply to every utterance received immediately.",
"There are two issues when applying existing dialogue agents to real life conversation. Firstly, when user sends a short utterance as the start of a conversation, the agent has to make a decision to avoid generating bad responses based on semantically incomplete utterance. Secondly, dialogue agent cutting in the conversation at an unreasonable time could confuse user and mess up the pace of conversation, leading to nonsense interactions.",
"To address these two issues, in this paper, we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to recognize if it is the appropriate moment for agent to reply when agent receives a message from the user. In our method, we have two imaginator modules and an arbitrator module. Imaginators will learn both of the agent's and user's speaking styles respectively. The arbitrator will use the dialogue history and the imagined future utterances generated by the two imaginators to decide whether the agent should wait user or make a response directly.",
"In summary, this paper makes the following contributions:",
"We first addressed an interaction problem, whether the dialogue model should wait for the end of the utterance or make a response directly in order to simulate real life conversation and tried several popular baseline models to solve it.",
"We proposed a novel Imagine-then-Arbitrate (ITA) neural dialogue model to solve the problem mentioned above, based on both of the historical conversation information and the predicted future possible utterances.",
"We modified two popular dialogue datasets to simulate the real human dialogue interaction behavior.",
"Experimental results demonstrate that our model performs well on addressing ending prediction issue and the proposed imaginator modules can significantly help arbitrator outperform baseline models."
],
[
"Creating a perfect artificial human-computer dialogue system is always a ultimate goal of natural language processing. In recent years, deep learning has become a basic technique in dialogue system. Lots of work has investigated on applying neural networks to dialogue system's components or end-to-end dialogue frameworks BIBREF4, BIBREF5. The advantage of deep learning is its ability to leverage large amount of data from internet, sensors, etc. The big conversation data and deep learning techniques like SEQ2SEQ BIBREF6 and attention mechanism BIBREF7 help the model understand the utterances, retrieve background knowledge and generate responses."
],
[
"Though end-to-end methods play a more and more important role in dialogue system, the text classification modules BIBREF8, BIBREF9 remains very useful in many problems like emotion recognition BIBREF10, gender recognition BIBREF11, verbal intelligence, etc. There have been several widely used text classification methods proposed, e.g. Recurrent Neural Networks (RNNs) and CNNs. Typically RNN is trained to recognize patterns across time, while CNN learns to recognize patterns across space. BIBREF12 proposed TextCNNs trained on top of pre-trained word vectors for sentence-level classification tasks, and achieved excellent results on multiple benchmarks.",
"Besides RNNs and CNNs, BIBREF13 proposed a new network architecture called Transformer, based solely on attention mechanism and obtained promising performance on many NLP tasks. To make the best use of unlabeled data, BIBREF14 introduced a new language representation model called BERT based on transformer and obtained state-of-the-art results."
],
[
"Different from retrieval method, Natural Language Generation (NLG) tries converting a communication goal, selected by the dialogue manager, into a natural language form. It reflects the naturalness of a dialogue system, and thus the user experience. Traditional template or rule-based approach mainly contains a set of templates, rules, and hand-craft heuristics designed by domain experts. This makes it labor-intensive yet rigid, motivating researchers to find more data-driven approaches BIBREF15, BIBREF2 that aim to optimize a generation module from corpora, one of which, Semantically Controlled LSTM (SC-LSTM) BIBREF16, a variant of LSTM BIBREF17, gives a semantic control on language generation with an extra component."
],
[
"In this section we will describe the task by taking a scenario and then define the task formally.",
"As shown in Figure FIGREF1, we have two participants in a conversation. One is the dialogue agent, and the other is a real human user. The agent's behavior is similar to most chatbots, except that it doesn't reply on every sentence received. Instead, this agent will judge to find the right time to reply.",
"Our problem is formulated as follows. There is a conversation history represented as a sequence of utterances: $X = \\lbrace x_1, x_2, ..., x_m\\rbrace $, where each utterance $x_i$ itself is a sequence of words $x_{i_1}, x_{i_2}, x_{i_3}...x_{i_n}$. Besides, each utterance has some additional tags:",
"turn tags $t_0, t_1, t_2 ... t_k$ to show which turn this utterance is in the whole conversation.",
"speakers' identification tags $agent$ or $user$ to show who sends this utterance.",
"subturn tags ${st}_0, {st}_1, {st}_2 ... {st}_j$ for user to indicate which subturn an utterance $t_i$is in. Note that an utterance will be labelled as ${st}_0$ even if it doesn't have one.",
"Now, given a dialogue history $X$ and tags $T$, the goal of the model is to predict a label $Y \\in \\lbrace 0,1\\rbrace $, the action the agent would take, where $Y = 0$ means the agent will wait the user for next message, and $Y = 1$ means the agent will reply immediately. Formally we are going to maximize following probability:"
],
[
"Basically, the task can be simplified as a simple text classification problem. However, traditional classification models only use the dialogue history $X$ and predict ground truth label. The ground truth label actually ignores all context information in the next utterance. To make the best use of training data, we propose a novel Imagine-then-Arbitrate (ITA) model taking $X$, ground truth label, and the future possible $X^{\\prime }$ into consideration. In this section, we will describe the architecture of our model and how it works in detail."
],
[
"An imaginator is a natural language generator generating next sentence given the dialogue history. There are two imaginators in our method, agent's imaginator and user's imaginator. The goal of the two imaginators are to learn the agent’s and user’s speaking style respectively and generate possible future utterances.",
"As shown in Figure FIGREF7 (a), imaginator itself is a sequence generation model. We use one-hot embedding to convert all words and relative tags, e.g. turn tags and place holders, to one-hot vectors $w_n \\in \\textbf {R}^V$, where $V$ is the length of vocabulary list. Then we extend each word $x_{i_j}$ in utterance $x_i$ by concatenating the token itself with turn tag, identity tag and subturn tag. We adopt SEQ2SEQ as the basic architecture and LSTMs as the encoder and decoder networks. LSTMs will encode each extended word $w_t$ as a continuous vector $h_t$ at each time step $t$. The process can be formulated as following:",
"where $e(w_t)$ is the embedding of the extended word $w_t$, $W_f$, $U_f$, $W_i$, $U_i$, $W_o$, $U_o$, $W_g$, $U_g$ and $b$ are learnt parameters.",
"Though trained on the same dataset, the two imaginators learn different roles independently. So in the same piece of dialogue, we split it into different samples for different imaginators. For example, as shown in Figure FIGREF1 and FIGREF7 (a), we use utterance (A1, U11, U12) as dialogue history input and U13 as ground truth to train the user imaginator and use utterance (A1, U11, U12, U13) as dialogue history and A2 as ground truth to train the agent imaginator.",
"During training, the encoder runs as equation DISPLAY_FORM15, and the decoder is the same structured LSTMs but $h_t$ will be fed to a Softmax with $W_{v} \\in {\\textbf {R}^{h \\times V}}, b_{v} \\in {\\textbf {R}^\\textbf {V}}$, which will produce a probability distribution $p_{t}$ over all words, formally:",
"the decoder at time step t will select the highest word in $p_{t}$, and our imaginator's loss is the sum of the negative log likelihood of the correct word at each step as follows:",
"where $N$ is the length of the generated sentence. During inference, we also apply beam search to improve the generation performance.",
"Finally, the trained agent imaginator and user imaginator are obtained."
],
[
"The arbitrator module is fundamentally a text classifier. However, in this task, we make the module maximally utilize both dialogue history and ground truth's semantic information. So we turned the problem of maximizing $Y$ from $X$ in equation (DISPLAY_FORM13) to:",
"where $\\textbf {IG}_{agent}$ and $\\textbf {IG}_{user}$ are the trained agent imaginator and user imaginator respectively, and $R^{\\prime }$ is a selection indicator where $R^{\\prime } = 1$ means selecting $R_{agent}$ whereas 0 means selecting $R_{user}$. And Thus we (1) introduce the generation ground truth semantic information and future possible predicted utterances (2) turn the label prediction problem into a response selection problem.",
"We adopt several architectures like Bi-GRUs, TextCNNs and BERT as the basis of arbitrator module. We will show how to build an arbitrator by taking TextCNNs as an example.",
"As is shown in Figure FIGREF7, the three CNNs with same structure take the inferred responses $R_{agent}$, $R_{user}$ and dialogue history $X$, tags $T$. For each raw word sequence $x_1,...,x_n$, we embed each word as one-hot vector $w_{i} \\in \\textbf {R}^V$. By looking up a word embedding matrix $E \\in \\textbf {R}^{V \\times d}$, the input text is represented as an input matrix $Q \\in \\textbf {R}^{l \\times d}$, where $l$ is the length of sequence of words and $d$ is the dimension of word embedding features. The matrix is then fed into a convolution layer where a filter $\\textbf {w} \\in \\textbf {R}^{k \\times d}$ is applied:",
"where $Q_{i:i+k-1}$ is the window of token representation and the function $f$ is $ReLU$, $W$ and $b$ are learnt parameters. Applying this filter to $m$ possible $Q_{i:i+k-1}$ obtains a feature map:",
"where $\\textbf {c} \\in \\textbf {R}^{l-k+1}$ for $m$ filters. And we use $j \\in \\textbf {R} $ different size of filters in parallel in the same convolution layer. This means we will have $m_1, m_2, \\dots , m_j$ windows at the same time, so formally:",
", then we apply max-over-time pooling operation to capture the most important feature:",
", and thus we get the final feature map of the input sequence.",
"We apply same CNNs to get the feature maps of $X$, $R_{agent}$ and $R_{user}$:",
"where function TextCNNs() follows as equations from DISPLAY_FORM20 to DISPLAY_FORM23. Then we will have two possible dialogue paths, $X$ with $R_{agent}$ and $X$ with $R_{user}$, representations $D_{agent}$ and $D_{user}$:",
"And then, the arbitrator will calculate the probability of the two possible dialogue paths:",
"Through learnt parameters $W_{4}$ and $b_{4}$, we will get a two-dimensional probability distribution $P$, in which the most reasonable response has the max probability. This also indicates whether the agent should wait or not.",
"And the total loss function of the whole attribution module will be negative log likelihood of the probability of choosing the correct action:",
"where $N$ is the number of samples and $Y_{i}$ is the ground truth label of i-th sample.",
"The arbitrator module based on Bi-GRU and BERT is implemented similar to TextCNNs."
],
[
"As the proposed approach mainly concentrates on the interaction of human-computer, we select and modify two very different style datasets to test the performance of our method. One is a task-oriented dialogue dataset MultiWoz 2.0 and the other is a chitchat dataset DailyDialogue . Both datasets are collected from human-to-human conversations. We evaluate and compare the results with the baseline methods in multiple dimensions. Table TABREF28 shows the statistics of datasets.",
"MultiWOZ 2.0 BIBREF18. MultiDomain Wizard-of-Oz dataset (MultiWOZ) is a fully-labeled collection of human-human written conversations. Compared with previous task-oriented dialogue datasets, e.g. DSTC 2 BIBREF19 and KVR BIBREF20, it is a much larger multi-turn conversational corpus and across serveral domains and topics: It is at least one order of magnitude larger than all previous annotated task-oriented corpora, with dialogues spanning across several domains and topics.",
"DailyDialogue BIBREF21. DailyDialogue is a high-quality multi-turn dialogue dataset, which contains conversations about daily life. In this dataset, humans often first respond to previous context and then propose their own questions and suggestions. In this way, people show their attention others’ words and are willing to continue the conversation. Compare to the task-oriented dialogue datasets, the speaker's behavior will be more unpredictable and complex for the arbitrator."
],
[
"Because the task we concentrate on is different from traditional ones, to make the datasets fit our problems and real life, we modify the datasets with the following steps:",
"Drop Slots and Values For task-oriented dialogue, slot labels are important for navigating the system to complete a specific task. However, those labels and accurate values from ontology files will not benefit our task essentially. So we replace all specific values with a slot placeholder in preprocessing step.",
"Split Utterances Existing datasets concentrate on the dialogue content, combining multiple sentences into one utterance each turn when gathering the data. In this step, we randomly split the combined utterance into multiple utterances according to the punctuation. And we set a determined probability to decide if the preprocessing program should split a certain sentence.",
"Add Turn Tag We add turn tags, subturn tags and role tags to each split and original sentences to (1) label the speaker role and dialogue turns (2) tag the ground truth for training and testing the supervised baselines and our model.",
"Finally, we have the modified datasets which imitate the real life human chatting behaviors as shown in Figure FIGREF1. Our datasets and code will be released to public for further researches in both academic and industry."
],
[
"To compare with dataset baselines in multiple dimensions and test the model's performance, we use the overall Bilingual Evaluation Understudy (BLEU) BIBREF22 to evaluate the imaginators' generation performance. As for arbitrator, we use accuracy score of the classification to evaluate. Accuracy in our experiments is the correct ratio in all samples."
],
[
"The hyper-parameter settings adopted in baselines and our model are the best practice settings for each training set. All models are tested with various hyper-parameter settings to get their best performance. Baseline models are Bidirectional Gated Recurrent Units (Bi-GRUs) BIBREF23, TextCNNs BIBREF12 and BERT BIBREF14."
],
[
"In Table TABREF29, we show different imaginators' generation abilities and their performances on the same TextCNN based arbitrator. Firstly, we gathered the results of agent and user imaginators' generation based on LSTM, LSTM-attention and LSTM-attention with GLOVE pretrained word embedding. According to the evaluation metric BLEU, the latter two models achieve higher but similar results. Secondly, when fixed the arbitrator on the TextCNNs model, the latter two also get the similar results on accuracy and significantly outperform the others including the TextCNNs baseline.",
"The performances on different arbitrators with the same LSTM-attention imaginators are shown in Table TABREF30. From those results, we can directly compared with the corresponding baseline models. The imaginators with BERT based arbitrator make the best results in both datasets while all ITA models beat the baseline models.",
"We also present an example of how our model runs in Table TABREF37. Imaginators predict the agent and user's utterance according to the dialogue history(shown in model prediction), and then arbitrator selects the user imaginator's prediction that is more suitable with the dialogue history. It is worth noting that the arbitrator generates a high-quality sentence again if only considering the generation effect. However, referring to the dialogue history, it is not a good choice since its semantic is repeated in the last turn by the agent."
],
[
"From Table TABREF30, we can see that not only our BERT based model get the best results in both datasets, the other two models also significantly beat the corresponding baselines. Even the TextCNNs based model can beat all baselines in both datasets.",
"Table TABREF29 figures out experiment results on MultiWOZ dataset. The LSTM based agent imaginator get the BLEU score at 11.77 on agent samples, in which the ground truth is agents' utterances, and 0.80 on user samples. Meanwhile, the user imaginator get the BLEU score at 0.3 on agent samples and 8.87 on user target samples. Similar results are shown in other imaginators' expermients. Although these comparisons seem unfair to some extends since we do not have the agent and user's real utterances at the same time and under the same dialogue history, these results show that the imaginators did learn the speaking style of agent and user respectively. So the suitable imaginator's generation will be more similar to the ground truth, such an example shown in Table TABREF37, which means this response more semantically suitable given the dialogue history.",
"If we fix the agent and user imaginators' model, as we take the LSTM-attention model, the arbitrators achieve different performances on different models, shown in Table TABREF30. As expected, ITA models beat their base models by nearly 2 $\\sim $ 3% and ITA-BERT model beats all other ITA models.",
"So from the all results, we can conclude that imaginators will significantly help the arbitrator in predicting the dialogue interaction behavior using the future possible agent and user responses’ semantic information."
],
[
"As shown in the DailyDialogue dataset of Table TABREF29, we can see that attention mechanism works in learning the generation task. LSTMs -Attention and LSTMs-attention-GLOVE based imaginators get more than 19 and 24 BLEU scores in corresponding target, while the LSTMs without attention gets only 4.51 and 8.70. These results also impact on the arbitrator results. The imaginator with attention mechanism get an accuracy score of 79.02 and 78.56, significantly better than the others. The evidence also exists in the results on MultiWoz. All imaginators get similar generation performance, so the arbitrators gets the similar accuracy scores.",
"From those results, we can conclude that there is positive correlation between the performance of imaginators and arbitrators. However, there still exists problems. It's not easy to evaluate the dialogue generation's performance. In the results of MultiWoz, we can see that LSTMs-GLOVE based ITA performs a little better than LSTMs-attention based ITA, but not the results of the arbitrator are opposite. This may indicate that (1) when the imaginators' performance is high enough, the arbitrator's performance will be stable and (2) the BLEU score will not perfectly present the contribution to the arbitrator. We leave these hypotheses in future work."
],
[
"We first address an interaction problem, whether the dialogue model should wait for the end of the utterance or reply directly in order to simulate user's real life conversation behavior, and propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to deal with it. Our model introduces the imagined future possible semantic information for prediction. We modified two popular dialogue datasets to fit in the real situation. It is reasonable that additional information is helpful for arbitrator, despite its fantasy."
]
],
"section_name": [
"Introduction",
"Related Work ::: Dialogue System",
"Related Work ::: Classification in Dialogue",
"Related Work ::: Dialogue Generation",
"Task Definition",
"Proposed Framework",
"Proposed Framework ::: Imaginator",
"Proposed Framework ::: Arbitrator",
"Experimental Setup ::: Datasets",
"Experimental Setup ::: Datasets Modification",
"Experimental Setup ::: Evaluation Method",
"Experimental Setup ::: Baselines and Training Setup",
"Experimental Results and Analysis ::: Results",
"Experimental Results and Analysis ::: Analysis ::: Imaginators Benefit the Performance",
"Experimental Results and Analysis ::: Analysis ::: Relation of Imaginators and Arbitrator's Performance",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"94f4c1acb40f3daaa6927bbb0da20e32e89edc9c",
"d3872b481ee787396c1ba6ab0a7aa8694611ffd0"
],
"answer": [
{
"evidence": [
"To compare with dataset baselines in multiple dimensions and test the model's performance, we use the overall Bilingual Evaluation Understudy (BLEU) BIBREF22 to evaluate the imaginators' generation performance. As for arbitrator, we use accuracy score of the classification to evaluate. Accuracy in our experiments is the correct ratio in all samples."
],
"extractive_spans": [
"Bilingual Evaluation Understudy (BLEU) BIBREF22",
"accuracy"
],
"free_form_answer": "",
"highlighted_evidence": [
"To compare with dataset baselines in multiple dimensions and test the model's performance, we use the overall Bilingual Evaluation Understudy (BLEU) BIBREF22 to evaluate the imaginators' generation performance. As for arbitrator, we use accuracy score of the classification to evaluate. Accuracy in our experiments is the correct ratio in all samples."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To compare with dataset baselines in multiple dimensions and test the model's performance, we use the overall Bilingual Evaluation Understudy (BLEU) BIBREF22 to evaluate the imaginators' generation performance. As for arbitrator, we use accuracy score of the classification to evaluate. Accuracy in our experiments is the correct ratio in all samples."
],
"extractive_spans": [
"BLEU",
"accuracy score"
],
"free_form_answer": "",
"highlighted_evidence": [
"To compare with dataset baselines in multiple dimensions and test the model's performance, we use the overall Bilingual Evaluation Understudy (BLEU) BIBREF22 to evaluate the imaginators' generation performance. As for arbitrator, we use accuracy score of the classification to evaluate."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"999eabf582c628a5ef3e811201f7d4a039095540",
"c3f0f9960067c813c3e3518d4690d2b8bf79ea4f"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Accuracy Results on Two datasets. Better results between baselines and corresponding ITA models are in BOLD and best results on datasets are in RED. Random result is the accuracy of script that making random decisions."
],
"extractive_spans": [],
"free_form_answer": "Best model outperforms baseline by 1.98% on MultiWoz dataset and .67% on DailyDialogue dataset",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Accuracy Results on Two datasets. Better results between baselines and corresponding ITA models are in BOLD and best results on datasets are in RED. Random result is the accuracy of script that making random decisions."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Accuracy Results on Two datasets. Better results between baselines and corresponding ITA models are in BOLD and best results on datasets are in RED. Random result is the accuracy of script that making random decisions.",
"The performances on different arbitrators with the same LSTM-attention imaginators are shown in Table TABREF30. From those results, we can directly compared with the corresponding baseline models. The imaginators with BERT based arbitrator make the best results in both datasets while all ITA models beat the baseline models."
],
"extractive_spans": [],
"free_form_answer": "Best accuracy result of proposed model is 82.73, 79.35 compared to best baseline result of 80.75, 78.68 on MultiWoz and DailyDialogue datasets respectively.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Accuracy Results on Two datasets. Better results between baselines and corresponding ITA models are in BOLD and best results on datasets are in RED. Random result is the accuracy of script that making random decisions.",
"The performances on different arbitrators with the same LSTM-attention imaginators are shown in Table TABREF30. From those results, we can directly compared with the corresponding baseline models. The imaginators with BERT based arbitrator make the best results in both datasets while all ITA models beat the baseline models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"119eb9ce95c781358d3036ab3a37375cbeed5f3c",
"728956d59b4adce6000a6a812e266f0a3804d389"
],
"answer": [
{
"evidence": [
"The hyper-parameter settings adopted in baselines and our model are the best practice settings for each training set. All models are tested with various hyper-parameter settings to get their best performance. Baseline models are Bidirectional Gated Recurrent Units (Bi-GRUs) BIBREF23, TextCNNs BIBREF12 and BERT BIBREF14."
],
"extractive_spans": [
"Bidirectional Gated Recurrent Units (Bi-GRUs) BIBREF23, TextCNNs BIBREF12 and BERT BIBREF14"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baseline models are Bidirectional Gated Recurrent Units (Bi-GRUs) BIBREF23, TextCNNs BIBREF12 and BERT BIBREF14."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The hyper-parameter settings adopted in baselines and our model are the best practice settings for each training set. All models are tested with various hyper-parameter settings to get their best performance. Baseline models are Bidirectional Gated Recurrent Units (Bi-GRUs) BIBREF23, TextCNNs BIBREF12 and BERT BIBREF14."
],
"extractive_spans": [
"Bi-GRUs",
"TextCNNs",
"BERT"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baseline models are Bidirectional Gated Recurrent Units (Bi-GRUs) BIBREF23, TextCNNs BIBREF12 and BERT BIBREF14."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"312d015e8a246d60038f9cbf4d8b65aa7fe04104",
"7b9e995b679b0582228c82cc34a3e87a40dda666"
],
"answer": [
{
"evidence": [
"As the proposed approach mainly concentrates on the interaction of human-computer, we select and modify two very different style datasets to test the performance of our method. One is a task-oriented dialogue dataset MultiWoz 2.0 and the other is a chitchat dataset DailyDialogue . Both datasets are collected from human-to-human conversations. We evaluate and compare the results with the baseline methods in multiple dimensions. Table TABREF28 shows the statistics of datasets."
],
"extractive_spans": [
"human-to-human conversations"
],
"free_form_answer": "",
"highlighted_evidence": [
"Both datasets are collected from human-to-human conversations."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As the proposed approach mainly concentrates on the interaction of human-computer, we select and modify two very different style datasets to test the performance of our method. One is a task-oriented dialogue dataset MultiWoz 2.0 and the other is a chitchat dataset DailyDialogue . Both datasets are collected from human-to-human conversations. We evaluate and compare the results with the baseline methods in multiple dimensions. Table TABREF28 shows the statistics of datasets."
],
"extractive_spans": [
"MultiWoz 2.0",
"DailyDialogue"
],
"free_form_answer": "",
"highlighted_evidence": [
"As the proposed approach mainly concentrates on the interaction of human-computer, we select and modify two very different style datasets to test the performance of our method. One is a task-oriented dialogue dataset MultiWoz 2.0 and the other is a chitchat dataset DailyDialogue . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What evaluation metrics did they use?",
"By how much does their model outperform the baseline?",
"Which models did they compare with?",
"What is the source of their datasets?"
],
"question_id": [
"fa5f5f58f6277a1e433f80c9a92a5629d6d9a271",
"3b9da1af1550e01d2e6ba2b9edf55a289f5fa8e2",
"f88f45ef563ea9e40c5767ab2eaa77f4700f95f8",
"99e99f2c25706085cd4de4d55afe0ac43213d7c8"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: A multi-turn dialogue fragment. In this case, user sends splited utterances in a turn, e.g. split U1 to {U11, U12 and U13}",
"Figure 2: Model Overview. (a) Train the agent and user imaginators using the same dialogues but different samples. (b) During training and inference step, arbitrator uses the dialogue history and two trained imaginators’ predictions.",
"Table 1: Datasets Statistics. Note that the statistics are based on the modified dataset described in Section 5.2",
"Table 2: Results of the different imaginators generation performance (in BLEU score) and accuracy score on the same TextCNNs based arbitrator. Better results between imaginators are in BOLD and best results on datasets are in RED.",
"Table 3: Accuracy Results on Two datasets. Better results between baselines and corresponding ITA models are in BOLD and best results on datasets are in RED. Random result is the accuracy of script that making random decisions.",
"Table 4: An Example of The Imaginator’s Generation and arbitrator’s Selection."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png"
]
} | [
"By how much does their model outperform the baseline?"
] | [
[
"2002.09616-Experimental Results and Analysis ::: Results-1",
"2002.09616-5-Table3-1.png"
]
] | [
"Best accuracy result of proposed model is 82.73, 79.35 compared to best baseline result of 80.75, 78.68 on MultiWoz and DailyDialogue datasets respectively."
] | 319 |
1906.06349 | On the Computational Power of RNNs | Recent neural network architectures such as the basic recurrent neural network (RNN) and Gated Recurrent Unit (GRU) have gained prominence as end-to-end learning architectures for natural language processing tasks. But what is the computational power of such systems? We prove that finite precision RNNs with one hidden layer and ReLU activation and finite precision GRUs are exactly as computationally powerful as deterministic finite automata. Allowing arbitrary precision, we prove that RNNs with one hidden layer and ReLU activation are at least as computationally powerful as pushdown automata. If we also allow infinite precision, infinite edge weights, and nonlinear output activation functions, we prove that GRUs are at least as computationally powerful as pushdown automata. All results are shown constructively. | {
"paragraphs": [
[
"Recent work [1] suggests that recurrent “neural network\" models of several types perform better than sequential models in acquiring and processing hierarchical structure. Indeed, recurrent networks have achieved state-of-the-art results in a number of natural language processing tasks, including named-entity recognition [2], language modeling [3], sentiment analysis [4], natural language generation [5], and beyond.",
"The hierarchical structure associated with natural languages is often modeled as some variant of context-free languages, whose languages may be defined over an alphabet INLINEFORM0 . These context-free languages are exactly those that can be recognized by pushdown automata (PDAs). Thus it is natural to ask whether these modern natural language processing tools, including simple recurrent neural networks (RNNs) and other, more advanced recurrent architectures, can learn to recognize these languages.",
"The computational power of RNNs has been studied extensively using empirical testing. Much of this research [8], [9] focused on the ability of RNNs to recognize simple context-free languages such as INLINEFORM0 and INLINEFORM1 , or context-sensitive languages such as INLINEFORM2 . Related works [10], [11], [12] focus instead on Dyck languages of balanced parenthesis, which motivates some of our methods. Gated architectures such as the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) obtain high accuracies on each of these tasks. While simpler RNNs have also been tested, one difficulty is that the standard hyperbolic tangent activation function makes counting difficult. On the other hand, RNNs with ReLU activations were found to perform better, but suffer from what is known as the “exploding gradient problem\" and thus are more difficult to train [8].",
"Instead of focusing on a single task, many researchers have studied the broader theoretical computational power of recurrent models, where weights are not trained but rather initialized to recognize a desired language. A celebrated result [6] shows that a simple recurrent architecture with 1058 hidden nodes and a saturated-linear activation INLINEFORM0 is a universal Turing Machine, with: INLINEFORM1 ",
"However, their architecture encodes the whole input in its internal state and the relevant computation is only performed after reading a terminal token. This differs from more common RNN variants that consume tokenized inputs at each time step. Furthermore, the authors admit that were the saturated-linear activation to be replaced with the similar and more common sigmoid or hyperbolic tangent activation functions, their methodology would fail.",
"More recent work [7] suggests that single-layer RNNs with rectified linear unit (ReLU) activations and softmax outputs can also be simulated as universal Turing Machines, but this approach again suffers from the assumption that the entire input is read before computation occurs.",
"Motivated by these earlier theoretical results, in this report we seek to show results about the computational power of recurrent architectures actually used in practice - namely, those that read tokens one at a time and that use standard rather than specially chosen activation functions. In particular we will prove that, allowing infinite precision, RNNs with just one hidden layer and ReLU activation are at least as powerful as PDAs, and that GRUs are at least as powerful as deterministic finite automata (DFAs). Furthermore, we show that using infinite edge weights and a non-standard output function, GRUs are also at least as powerful as PDAs."
],
[
"Let a simple RNN be an RNN with the following architecture: INLINEFORM0 ",
" where INLINEFORM0 for all INLINEFORM1 , for some chosen activation function INLINEFORM2 , usually the ReLU or the hyperbolic tangent functions. We assume that the inputs are one-hots of a given set of symbols INLINEFORM3 , vectors of length INLINEFORM4 where each element but one is INLINEFORM5 and the remaining element is INLINEFORM6 .",
"Say that an RNN accepts an input INLINEFORM0 of length INLINEFORM1 if after passing INLINEFORM2 through the RNN, its final output INLINEFORM3 belongs to a predetermined set INLINEFORM4 , for which membership can be tested in INLINEFORM5 time. Let the INLINEFORM6 -language of an RNN consist exactly of all inputs that it accepts given set INLINEFORM7 .",
"In practice, the inputs and hidden nodes of an RNN are stored as numbers with finite precision. Including this restriction, we show the following result:",
"Theorem 1.1. For every language INLINEFORM0 , INLINEFORM1 is regular if and only if INLINEFORM2 is the INLINEFORM3 -language of some finite precision simple RNN.",
"Proof. We begin with the “if\" direction. Suppose we are given some simple RNN and set INLINEFORM0 . It suffices to show that there exists a DFA that accepts the INLINEFORM1 -language of this RNN. Assume that the RNN has INLINEFORM2 hidden nodes, and that these hidden nodes are precise up to INLINEFORM3 bits. Then there are exactly INLINEFORM4 possible hidden states for the RNN. Construct the following DFA with:",
"It's clear that after reading the first INLINEFORM0 inputs of a word INLINEFORM1 , the current state of this DFA is INLINEFORM2 , which immediately completes the proof of this direction.",
"For the “only if\" direction, suppose we have a DFA INLINEFORM0 with corresponding language INLINEFORM1 . We will construct a simple RNN whose inputs are one-hotted symbols from INLINEFORM2 , with ReLU activation function INLINEFORM3 , and with INLINEFORM4 hidden nodes whose INLINEFORM5 -language is INLINEFORM6 .",
"The RNN has three layers: the first layer (input layer) has INLINEFORM0 nodes; the second layer (hidden layer) has INLINEFORM1 nodes; and the third layer (output layer) has one node. For the INLINEFORM2 nodes in the input layer associated with the one-hot of the current symbol, label each node with its corresponding symbol from INLINEFORM3 . Label the INLINEFORM4 hidden nodes (in both the first and second layers) with all INLINEFORM5 symbol-state combinations INLINEFORM6 for INLINEFORM7 and INLINEFORM8 .",
"For every INLINEFORM0 , connect the node in the input layer with label INLINEFORM1 to all nodes in the hidden layer with labels INLINEFORM2 for any INLINEFORM3 with edges with weight INLINEFORM4 . For all INLINEFORM5 , connect the node in the input layer with label INLINEFORM6 to all nodes in the hidden layer with labels INLINEFORM7 where INLINEFORM8 with edges also of weight INLINEFORM9 . Finally, for all INLINEFORM10 , connect the node in the hidden layer with label INLINEFORM11 to the single node in the output layer with an edge of weight INLINEFORM12 .",
"Each of the hidden nodes are initialized to INLINEFORM0 except a single hidden node with label INLINEFORM1 for a randomly chosen INLINEFORM2 , which is initialized to INLINEFORM3 . To complete the description of the RNN, we set INLINEFORM4 and INLINEFORM5 . We claim that the following invariant is maintained: after reading some word, suppose the current state of INLINEFORM6 is INLINEFORM7 . Then after reading the same word, the hidden nodes of the RNN would all be equal to INLINEFORM8 except for one node with label INLINEFORM9 for some INLINEFORM10 , which would equal INLINEFORM11 .",
"We prove the claim by induction on the length of the inputted word INLINEFORM0 . The base case of INLINEFORM1 is trivial. Now assume that after reading a word of length INLINEFORM2 the current state of INLINEFORM3 is INLINEFORM4 , and after reading that same word all hidden nodes of the RNN are equal to INLINEFORM5 except one node with label INLINEFORM6 for some INLINEFORM7 , which is equal to INLINEFORM8 . If the next symbol is INLINEFORM9 , then the current state of INLINEFORM10 would be INLINEFORM11 where INLINEFORM12 . For the RNN, the input layer will have exactly two INLINEFORM13 s, namely the node with label INLINEFORM14 and the node with label INLINEFORM15 . Since all edges have weight INLINEFORM16 , that means that before adding INLINEFORM17 or applying INLINEFORM18 the maximum value a node in the hidden layer can take on is INLINEFORM19 . For this to occur it must be connected to both the nodes in the input layer with value INLINEFORM20 , and thus by definition its label must be INLINEFORM21 . By integrality every other node in the hidden layer will take on a value of at most INLINEFORM22 , so after adding INLINEFORM23 and applying INLINEFORM24 we easily see that the invariant is maintained.",
"Utilizing this invariant it is clear that upon reading a word INLINEFORM0 the RNN will output INLINEFORM1 , and upon reading a word INLINEFORM2 it will output INLINEFORM3 . Thus INLINEFORM4 is precisely the INLINEFORM5 -language of the RNN and the theorem is proven. INLINEFORM6 ",
"Discussion 1.2. This result shows that simple RNNs with finite precision are exactly as computationally powerful as DFAs. In terms of reducing the size of the hidden layer constructed in the proof of the “only if\" direction, it seems likely that INLINEFORM0 is optimal since INLINEFORM1 is defined on INLINEFORM2 inputs and needs to be captured fully by the RNN.",
"Removing the finite precision stipulation unsurprisingly increases the capabilities of RNNs. It is natural to now ask whether these simple RNNs can recognize more complicated INLINEFORM0 -languages, and indeed the answer is affirmative. Thus we shift our focus to context-free languages. We begin with some preliminaries:",
"The Dyck language INLINEFORM0 consists of all words over the size INLINEFORM1 alphabet INLINEFORM2 that correspond to a balanced string of INLINEFORM3 types of parentheses. We also define the set of proper prefixes INLINEFORM4 ",
"so that any word in INLINEFORM0 is the prefix of a word in INLINEFORM1 but is itself unbalanced. We proceed with a motivating theorem:",
"Theorem 1.3 (Chomsky-Sch INLINEFORM0 tzenberger Theorem). Any context-free language INLINEFORM1 can be written as INLINEFORM2 for some INLINEFORM3 and regular language INLINEFORM4 after a suitable relabeling.",
"Proof. The interested reader may find a proof in [13]. INLINEFORM0 ",
"Thus it makes sense to focus on constructing sets INLINEFORM0 and simple RNNs whose INLINEFORM1 -language is INLINEFORM2 . Indeed, since INLINEFORM3 for some homomorphism INLINEFORM4 , we start by focusing on INLINEFORM5 , in some sense the “hardest\" context-free language.",
"The critical idea is to “memorize\" an input in the binary representation of some rational number, simulating a stack. Indeed, consider associating with any word INLINEFORM0 a state INLINEFORM1 , defined as follows: INLINEFORM2 ",
" Consider the word INLINEFORM0 . The evolution of the state as the word is read symbol by symbol is given by INLINEFORM1 ",
"This example makes it clear that this notion of state accurately captures all the relevant information about words in INLINEFORM0 .",
"The difficulty in capturing this notion of state in a RNN is that the constant to multiply INLINEFORM0 by changes depending on the input (it can be either INLINEFORM1 or INLINEFORM2 in our example above). Thus storing INLINEFORM3 in a single hidden node is impossible. Instead, we use two hidden nodes. Below, we generalize from INLINEFORM4 to INLINEFORM5 .",
"Ignoring the output layer for now, consider the simple RNN defined by INLINEFORM0 ",
" where the inputs INLINEFORM0 are INLINEFORM1 one-hots of the symbols in INLINEFORM2 (the alphabet of INLINEFORM3 ) in the order INLINEFORM4 and the hidden states have dimension INLINEFORM5 where INLINEFORM6 ",
" As before, associate with each word INLINEFORM0 a state INLINEFORM1 now satisfying INLINEFORM2 ",
" for all INLINEFORM0 .",
"This is similar to the state we defined before, though now generalized to INLINEFORM0 and also with intentionally present blank space inserted between the digits in base INLINEFORM1 . We will show the following invariant:",
"Lemma 1.4. Given an input word INLINEFORM0 , we have INLINEFORM1 or INLINEFORM2 for all INLINEFORM3 .",
"Proof. We proceed by induction on INLINEFORM0 . The base case of INLINEFORM1 is trivial. Now, suppose INLINEFORM2 for some INLINEFORM3 and assume without loss of generality that INLINEFORM4 . Then INLINEFORM5 ",
"Now, since INLINEFORM0 we have that INLINEFORM1 for any INLINEFORM2 , which follows immediately from the stack interpretation of the base INLINEFORM3 representation of INLINEFORM4 . Thus INLINEFORM5 and so INLINEFORM6 ",
"as desired. Alternatively, suppose INLINEFORM0 for some INLINEFORM1 . Again, assume without loss of generality that INLINEFORM2 . Then INLINEFORM3 ",
"The fact that INLINEFORM0 clearly implies that INLINEFORM1 and so we have that INLINEFORM2 ",
"which completes the induction. INLINEFORM0 ",
"A pictorial example of this RNN is depicted below for INLINEFORM0 :",
"vertex=[circle, draw] [transform shape] vertex](r1) at (-2, 2) INLINEFORM0 ; vertex](r2) at (2, 2) INLINEFORM1 ; vertex](q1) at (-7,-2) INLINEFORM2 ; vertex](q2) at (-5,-2) INLINEFORM3 ; vertex](q3) at (-3,-2) INLINEFORM4 ; vertex](q4) at (-1,-2) INLINEFORM5 ; vertex](h1) at (3,-2) INLINEFORM6 ; vertex](h2) at (7,-2) INLINEFORM7 ; [every path/.style=-, every node/.style=inner sep=1pt] (r1) – node [pos=0.5, anchor=south east] INLINEFORM8 (q1); (r1) – node [pos=0.5, anchor=south east] INLINEFORM9 (q2); (r1) – node [pos=0.7, anchor=north west] INLINEFORM10 (q3); (r1) – node [pos=0.5, anchor=north east] INLINEFORM11 (q4); (r1) – node [pos=0.75, anchor=south west] INLINEFORM12 (h1); (r1) – node [pos=0.65, anchor=south west] INLINEFORM13 (h2); (r2) – node [anchor=south east, pos=0.8] INLINEFORM14 (q1); (r2) – node [anchor=south east, pos=0.8] INLINEFORM15 (q2); (r2) – node [pos=0.5, anchor=south east] INLINEFORM16 (q3); (r2) – node [pos=0.75, anchor=north west] INLINEFORM17 (q4); (r2) – node [pos=0.25, anchor=south west] INLINEFORM18 (h1); (r2) – node [pos=0.5, anchor=south west] INLINEFORM19 (h2);",
"Thus we have found an efficient way to store INLINEFORM0 . Now it's clear that for any INLINEFORM1 we have INLINEFORM2 and for any INLINEFORM3 we have INLINEFORM4 , so it is tempting to try and add a simple output layer to this RNN and claim that its INLINEFORM5 -language is INLINEFORM6 . However, this is most likely impossible to accomplish.",
"Indeed, consider the word INLINEFORM0 . We have that INLINEFORM1 for this word, but INLINEFORM2 . Furthermore, consider the word INLINEFORM3 . We have that INLINEFORM4 for all INLINEFORM5 and INLINEFORM6 for this word, yet INLINEFORM7 . Hence we must be able to flag when an inappropriate closing parenthesis appears in an input and retain that information while reading the rest of the input. To that end, consider the following simple RNN, an example of which can be found in Appendix A.1: INLINEFORM8 ",
" where again the inputs INLINEFORM0 are INLINEFORM1 one-hots of the symbols in INLINEFORM2 (the alphabet of INLINEFORM3 ) in the order INLINEFORM4 and the hidden states have dimension INLINEFORM5 where INLINEFORM6 ",
" Because the last four elements of the first two rows of INLINEFORM0 are all equal to INLINEFORM1 and otherwise the first two rows of INLINEFORM2 and INLINEFORM3 are the same as before, it is clear that Lemma 1.4 still applies in some form for the new simple RNN. Indeed, denoting INLINEFORM4 ",
"we have",
"Corollary 1.5. With respect to a word INLINEFORM0 , we have INLINEFORM1 or INLINEFORM2 for all INLINEFORM3 .",
"We proceed with an important lemma:",
"Lemma 1.6. For any word INLINEFORM0 , there is a unique INLINEFORM1 such that INLINEFORM2 .",
"Proof. This immediately follows from the definition of a balanced string. Indeed, if INLINEFORM0 is the state associated with INLINEFORM1 then this unique INLINEFORM2 is given by INLINEFORM3 ",
" INLINEFORM0 ",
"We are now ready to show the following:",
"Lemma 1.7. Given an input word INLINEFORM0 , we have that INLINEFORM1 .",
"Proof. We first restrict our attention to INLINEFORM0 . Note that INLINEFORM1 ",
"for any INLINEFORM0 , which follows from the definition of INLINEFORM1 and INLINEFORM2 . Then using Corollary 1.5 we find INLINEFORM3 ",
"Now using the inequality in the proof of Lemma 1.6 we immediately obtain INLINEFORM0 as desired.",
"Considering now INLINEFORM0 we notice INLINEFORM1 ",
"and doing an analysis similar to that for INLINEFORM0 , we obtain INLINEFORM1 as desired. INLINEFORM2 ",
"Applying Lemma 1.6 allows us to make the following statement:",
"Lemma 1.8. Given a word INLINEFORM0 , consider the unique INLINEFORM1 such that INLINEFORM2 . Then with respect to a word INLINEFORM3 with INLINEFORM4 , we have INLINEFORM5 . Similarly, with respect to a word INLINEFORM6 with INLINEFORM7 , we have INLINEFORM8 .",
"Proof. First suppose INLINEFORM0 . As in the proof of Lemma 1.7, we use INLINEFORM1 ",
"where we again use Corollary 1.5 and the fact that INLINEFORM0 from Lemma 1.7. But from the proof of Lemma 1.6, since INLINEFORM1 we know that INLINEFORM2 ",
"and since INLINEFORM0 we have that INLINEFORM1 since INLINEFORM2 and INLINEFORM3 are integral. Thus INLINEFORM4 as desired.",
"Now assume INLINEFORM0 . As in the previous case we obtain INLINEFORM1 ",
"again using Corollary 1.5 and Lemma 1.7. And again using the inequality from the proof of Lemma 1.6 and the fact that INLINEFORM0 we obtain INLINEFORM1 , completing the proof. INLINEFORM2 ",
"Thus we have constructed the desired “flags.\" Indeed, hidden nodes INLINEFORM0 and INLINEFORM1 remain equal to INLINEFORM2 while the currently read input lies in INLINEFORM3 , but one of these nodes becomes positive the moment the currently read input does not lie in this set.",
"However, there are still difficulties. It is possible for INLINEFORM0 or INLINEFORM1 to become positive and later return to INLINEFORM2 . Indeed, running the simple RNN on the word INLINEFORM3 , we compute INLINEFORM4 . However, clearly INLINEFORM5 . Therefore we need to add architecture that retains the information as to whether the hidden nodes INLINEFORM6 or INLINEFORM7 ever become positive, and below we show that hidden nodes INLINEFORM8 and INLINEFORM9 respectively are sufficient.",
"Lemma 1.9. For any input INLINEFORM0 we have INLINEFORM1 INLINEFORM2 ",
"Proof. From the definition of INLINEFORM0 and INLINEFORM1 we have INLINEFORM2 INLINEFORM3 ",
"and since INLINEFORM0 for all INLINEFORM1 (because of the ReLU) we immediately have the result by induction or direct expansion. INLINEFORM2 ",
"We are now ready to combine these lemmas and accomplish our original goal:",
"Theorem 1.10. The INLINEFORM0 -language of the simple RNN described earlier in the section is INLINEFORM1 .",
"Proof. Consider any input INLINEFORM0 into the RNN. For the remainder of the proof, remember that INLINEFORM1 for all INLINEFORM2 because of the ReLU activation. We consider three cases:",
"In this case by Corollary 1.5 we have INLINEFORM0 . Furthermore, by Lemma 1.7 we have INLINEFORM1 . By combining Lemmas 1.7 and 1.9, we have INLINEFORM2 . Thus INLINEFORM3 which, given that INLINEFORM4 , equals INLINEFORM5 precisely when INLINEFORM6 , by the inequality from the proof of Lemma 1.6.",
"In this case we clearly must have INLINEFORM0 for some INLINEFORM1 and thus by Lemma 1.8 we have that either INLINEFORM2 or INLINEFORM3 , so INLINEFORM4 .",
"Suppose INLINEFORM0 is the minimal index such that INLINEFORM1 . Then by minimality INLINEFORM2 so again by Lemma 1.8 we have that either INLINEFORM3 or INLINEFORM4 . But since INLINEFORM5 by Lemma 1.9 this means that either INLINEFORM6 or INLINEFORM7 , so INLINEFORM8 .",
"Thus INLINEFORM0 if and only if INLINEFORM1 , completing the proof of the theorem. INLINEFORM2 ",
"Now recall in the proof of Theorem 1.1 we showed that any regular language INLINEFORM0 was the INLINEFORM1 -language of some simple RNN, and moreover that for any input not in INLINEFORM2 the output of that RNN is positive. This allows us to provide a simple proof of the main theorem of this section:",
"Theorem 1.11. For any context-free language INLINEFORM0 , suppose we relabel and write INLINEFORM1 for some regular language INLINEFORM2 , whose corresponding minimum-size DFA has INLINEFORM3 states. Then there exists a simple RNN with a hidden layer of size INLINEFORM4 whose INLINEFORM5 -language is INLINEFORM6 .",
"Proof. Consider the simple RNN with INLINEFORM0 as its INLINEFORM1 -language described in the proof of Theorem 1.1 and the simple RNN with INLINEFORM2 as its INLINEFORM3 -language constructed to prove Theorem 1.10. Merge the INLINEFORM4 nodes in the input layer corresponding to the input and merge the single output nodes of both RNNs. Stack the two hidden layers, and add no new edges. There were INLINEFORM5 hidden nodes in the first RNN and INLINEFORM6 in the second, so altogether the new RNN has INLINEFORM7 hidden nodes.",
"The output of the new RNN is equal to the summed output of the two original RNNs, and from the proofs of Theorems 1.1 and 1.10 these outputs are always nonnegative. Thus the output of the new RNN is INLINEFORM0 if and only if the outputs of both old RNNs were INLINEFORM1 , immediately proving the theorem. INLINEFORM2 ",
"Discussion 1.12. This result shows that simple RNNs with arbitrary precision are at least as computationally powerful as PDAs."
],
[
"In practice, architectures more complicated than the simple RNNs studied above - notably gated RNNs, including the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) - perform better on many natural language tasks. Thus we are motivated to explore their computational capabilities. Here we focus on the GRU, described by the equations below: INLINEFORM0 ",
" for some INLINEFORM0 where INLINEFORM1 has dimension INLINEFORM2 and INLINEFORM3 is the sigmoid function and INLINEFORM4 is the hyperbolic tangent function, and the INLINEFORM5 symbol represents element-wise multiplication. Usually the hidden state INLINEFORM6 is initialized to be INLINEFORM7 , but we will ignore that restriction. Some literature switches the placements of the INLINEFORM8 and INLINEFORM9 , but since INLINEFORM10 this is immaterial.",
"We begin this section by again limiting our architecture to use finite precision, and also assume INLINEFORM0 for some INLINEFORM1 . We can prove an analogue of Theorem 1.1:",
"Theorem 2.1. For every language INLINEFORM0 , INLINEFORM1 is regular if and only if INLINEFORM2 is the INLINEFORM3 -language of some finite precision GRU.",
"Proof. The “if\" direction can be shown in the same manner as in Theorem 1.1. So, here we focus on the “only if\" direction. Suppose we have a DFA INLINEFORM0 with corresponding language INLINEFORM1 . We will construct a GRU whose inputs are one-hotted symbols from INLINEFORM2 with INLINEFORM3 hidden nodes whose INLINEFORM4 -language is INLINEFORM5 .",
"For convenience, for all INLINEFORM0 let INLINEFORM1 denote the corresponding one-hot vector for INLINEFORM2 . Furthermore, let INLINEFORM3 .",
"First set INLINEFORM0 and INLINEFORM1 and INLINEFORM2 , so the simplified GRU is given by: INLINEFORM3 ",
" Now, define an arbitrary bijective map INLINEFORM0 . Then construct INLINEFORM1 vectors INLINEFORM2 ",
"where for all INLINEFORM0 and INLINEFORM1 we set INLINEFORM2 ",
"Our goal will be to find INLINEFORM0 and INLINEFORM1 such that if INLINEFORM2 for some INLINEFORM3 , and INLINEFORM4 is the one-hot encoding of some INLINEFORM5 , then INLINEFORM6 where if INLINEFORM7 for some INLINEFORM8 then INLINEFORM9 . If this is possible, then we could set INLINEFORM10 and be able to track the current state of the DFA effectively.",
"The strategy for accomplishing this is essentially to pick a simple INLINEFORM0 , and then solve a system of equations to produce the desired INLINEFORM1 .",
"For convenience, define the natural map INLINEFORM0 where INLINEFORM1 if and only if the INLINEFORM2 th element of INLINEFORM3 is equal to INLINEFORM4 .",
"Let INLINEFORM0 ",
"where INLINEFORM0 ",
"for all INLINEFORM0 and INLINEFORM1 . Now consider the INLINEFORM2 equations INLINEFORM3 ",
"where INLINEFORM0 , for every INLINEFORM1 and INLINEFORM2 . Let INLINEFORM3 ",
" for all INLINEFORM0 and INLINEFORM1 and INLINEFORM2 . Letting INLINEFORM3 ",
"The INLINEFORM0 earlier equations can now be combined as a single matrix equation given by INLINEFORM1 ",
"Now it is easy to see that INLINEFORM0 ",
"where INLINEFORM0 is a INLINEFORM1 matrix for each INLINEFORM2 . In particular, we have that INLINEFORM3 ",
"for each INLINEFORM0 .",
"Using basic row operations it is easy to see that INLINEFORM0 for all INLINEFORM1 , so INLINEFORM2 ",
"and thus INLINEFORM0 is well-defined. Furthermore, since INLINEFORM1 for each INLINEFORM2 , the inputs into all inverse hyperbolic tangents in INLINEFORM3 lie in INLINEFORM4 and so INLINEFORM5 is well-defined as well. Thus our expression for INLINEFORM6 is well-defined.",
"Now, given our choices for the INLINEFORM0 , and INLINEFORM1 , after reading any input INLINEFORM2 , if INLINEFORM3 is the current state of the DFA associated with INLINEFORM4 , then INLINEFORM5 . Now because the INLINEFORM6 are clearly linearly independent, we can find a INLINEFORM7 such that INLINEFORM8 ",
"for all INLINEFORM0 and it's clear that the INLINEFORM1 -language of the resulting GRU will be INLINEFORM2 , as desired. INLINEFORM3 ",
"Discussion 2.2. In the above proof, we are implicitly assuming that the activation functions of the GRU are not actually the sigmoid and hyperbolic tangent functions but rather finite precision analogues for which the equations we solved are all consistent. However, for the remainder of this section we can drop this assumption.",
"If we remove the finite precision restriction, we again wish to prove that Gated RNNs are as powerful as PDAs. To do so, we emulate the approach from Section 1. Immediately we encounter difficulties - in particular, our previous approach relied on maintaining the digits of a state INLINEFORM0 in base INLINEFORM1 very carefully. With outputs now run through sigmoid and hyperbolic tangent functions, this becomes very hard. Furthermore, updating the state INLINEFORM2 occasionally requires multiplication by INLINEFORM3 (when we read a closing parenthesis). But because INLINEFORM4 and INLINEFORM5 for all INLINEFORM6 , this is impossible to do with the GRU architecture.",
"To account for both of these issues, instead of keeping track of the state INLINEFORM0 as we read a word, we will instead keep track of the state INLINEFORM1 of a word INLINEFORM2 defined by INLINEFORM3 ",
" for all INLINEFORM0 , for some predetermined sufficiently large INLINEFORM1 . We have the following relationship between INLINEFORM2 and INLINEFORM3 :",
"Lemma 2.3. For any word INLINEFORM0 we have INLINEFORM1 for all INLINEFORM2 .",
"Proof. Multiplying the recurrence relationship for INLINEFORM0 by INLINEFORM1 we recover the recurrence relationship for INLINEFORM2 in Section 1, implying the desired result. INLINEFORM3 ",
"Thus the state INLINEFORM0 allows us to keep track of the old state INLINEFORM1 without having to multiply by any constant greater than INLINEFORM2 . Furthermore, for large INLINEFORM3 , INLINEFORM4 will be extremely small, allowing us to abuse the fact that INLINEFORM5 for small values of INLINEFORM6 . In terms of the stack of digits interpretation of INLINEFORM7 , INLINEFORM8 is the same except between every pop or push we add INLINEFORM9 zeros to the top of the stack.",
"Again we wish to construct a GRU from whose hidden state we can recover INLINEFORM0 . Ignoring the output layer for now, consider the GRU defined by INLINEFORM1 ",
" where INLINEFORM0 will be determined later, the inputs INLINEFORM1 are again INLINEFORM2 one-hots of the symbols in INLINEFORM3 in the order INLINEFORM4 and the hidden states have dimension INLINEFORM5 where INLINEFORM6 ",
" where INLINEFORM0 is the inverse of the sigmoid function. For sufficiently large INLINEFORM1 , clearly our use of INLINEFORM2 is well-defined. We will show the following invariant:",
"Lemma 2.4. Given an input word INLINEFORM0 , if INLINEFORM1 then we have INLINEFORM2 for all INLINEFORM3 .",
"Proof. As in Section 1, let INLINEFORM0 and INLINEFORM1 and INLINEFORM2 . First, we will show INLINEFORM3 for all INLINEFORM4 by induction on INLINEFORM5 . The base case is trivial, so note INLINEFORM6 ",
" so by induction INLINEFORM0 as desired. Similarly, we obtain INLINEFORM1 for all INLINEFORM2 .",
"Now we restrict our attention to INLINEFORM0 . Note that INLINEFORM1 ",
" and so using the definition of INLINEFORM0 we obtain INLINEFORM1 ",
" If we removed the INLINEFORM0 from the above expression, it would simplify to INLINEFORM1 ",
"which is exactly the recurrence relation satisfied by INLINEFORM0 . Since the expressions inside the hyperbolic tangents are extremely small (on the order of INLINEFORM1 ), this implies that INLINEFORM2 is a good approximation for INLINEFORM3 as desired. This will be formalized in the next lemma. INLINEFORM4 ",
"Lemma 2.5. For any input word INLINEFORM0 , if INLINEFORM1 then we have INLINEFORM2 for all INLINEFORM3 .",
"Proof. Let INLINEFORM0 for all INLINEFORM1 . Then we easily find that INLINEFORM2 ",
"Now define INLINEFORM0 by the recurrence INLINEFORM1 ",
"with INLINEFORM0 . Because INLINEFORM1 for all INLINEFORM2 it is easy to see that INLINEFORM3 for all INLINEFORM4 .",
"Now by a Taylor expansion, INLINEFORM0 , so we have that INLINEFORM1 ",
"for INLINEFORM0 . Thus we obtain the bound INLINEFORM1 ",
"Since INLINEFORM0 and INLINEFORM1 we also have INLINEFORM2 ",
"Similarly we obtain the bound INLINEFORM0 ",
"Since again INLINEFORM0 and INLINEFORM1 we also have INLINEFORM2 ",
"Thus if we define INLINEFORM0 by the recurrence INLINEFORM1 ",
"with INLINEFORM0 , then INLINEFORM1 for all INLINEFORM2 .",
"Now we wish to upper bound INLINEFORM0 . Since INLINEFORM1 is not present in the recurrence for INLINEFORM2 , assume without loss of generality that all parenthesis in an input word INLINEFORM3 lie in INLINEFORM4 . Suppose that INLINEFORM5 was a substring of INLINEFORM6 , so that INLINEFORM7 . Then we would have INLINEFORM8 ",
" However, for the word INLINEFORM0 (which would clearly still lie in INLINEFORM1 ) we would have INLINEFORM2 ",
" which is larger. Thus to upper bound INLINEFORM0 it suffices to consider only words that do not contain the substring INLINEFORM1 , which are words in the form INLINEFORM2 ",
"with INLINEFORM0 open parentheses followed by INLINEFORM1 closing parentheses. Furthermore, adding extra closing parenthesis where suitable clearly increases the final INLINEFORM2 so we can assume INLINEFORM3 . We can then exactly calculate INLINEFORM4 as INLINEFORM5 ",
"Considering each sum separately we have for sufficiently large INLINEFORM0 that INLINEFORM1 ",
" and INLINEFORM0 ",
" And therefore INLINEFORM0 is an upper bound on INLINEFORM1 . Thus INLINEFORM2 ",
"for all INLINEFORM0 as desired. INLINEFORM1 ",
"Corollary 2.6. For any input word INLINEFORM0 , if INLINEFORM1 contains INLINEFORM2 open parentheses and INLINEFORM3 closing parentheses then INLINEFORM4 ",
"with INLINEFORM0 for all INLINEFORM1 .",
"Proof. This follows directly from the computations in the proof of Lemma 2.5 and the recurrence for INLINEFORM0 . INLINEFORM1 ",
"Now, set INLINEFORM0 . We then have the following useful analogues of Lemmas 1.7 and 1.8:",
"Corollary 2.7. For any input word INLINEFORM0 we have INLINEFORM1 .",
"Proof. This follows immediately from Corollary 2.6 and the fact that INLINEFORM0 . INLINEFORM1 ",
"Lemma 2.8. Given a word INLINEFORM0 , consider the unique INLINEFORM1 such that INLINEFORM2 . Then for an input word INLINEFORM3 with INLINEFORM4 , we have INLINEFORM5 .",
"Note that INLINEFORM0 ",
"so multiplying both sides by INLINEFORM0 and using the inequality from the proof of Lemma 2.5 we have INLINEFORM1 ",
"Now by Corollary 2.6 we have that INLINEFORM0 ",
"where we used the inequality from the proof of Lemma 1.6 and the fact that INLINEFORM0 . Therefore INLINEFORM1 ",
"Since INLINEFORM0 we have that INLINEFORM1 and so for sufficiently large INLINEFORM2 we then have INLINEFORM3 ",
"as desired. INLINEFORM0 ",
"With these results in hand, consider the larger GRU, an example of which can be found in Appendix A.2, defined by INLINEFORM0 ",
" where the inputs INLINEFORM0 are again INLINEFORM1 one-hots of the symbols in INLINEFORM2 in the order INLINEFORM3 and the hidden states have dimension INLINEFORM4 where INLINEFORM5 ",
" As before, with respect to a word INLINEFORM0 define INLINEFORM1 by INLINEFORM2 ",
" for all INLINEFORM0 and all INLINEFORM1 . Similarly define INLINEFORM2 by INLINEFORM3 ",
" For our new GRU, let INLINEFORM0 . We then have the following results:",
"Lemma 2.9. For any input word INLINEFORM0 we have INLINEFORM1 .",
"Proof. This follows immediately from the proof of Lemma 2.4. INLINEFORM0 ",
"Lemma 2.10. For any input word INLINEFORM0 , if INLINEFORM1 contains INLINEFORM2 open parentheses and INLINEFORM3 closing parenthesis then INLINEFORM4 INLINEFORM5 ",
"with INLINEFORM0 for all INLINEFORM1 .",
"Proof. This follows immediately from the proof of Corollary 2.6 and the new INLINEFORM0 , since INLINEFORM1 behaves exactly like INLINEFORM2 if each input INLINEFORM3 or INLINEFORM4 were INLINEFORM5 or INLINEFORM6 respectively, instead. INLINEFORM7 ",
"Lemma 2.11. For any input word INLINEFORM0 we have INLINEFORM1 and INLINEFORM2 if and only if INLINEFORM3 .",
"Proof. From our chosen INLINEFORM0 we see that INLINEFORM1 INLINEFORM2 ",
"Since INLINEFORM0 and since the fourth and eighth rows of INLINEFORM1 are identically INLINEFORM2 , the equation INLINEFORM3 ",
"implies that INLINEFORM0 INLINEFORM1 ",
"which immediately implies that INLINEFORM0 . Now, suppose INLINEFORM1 . Then from Corollary 2.7 and its analogue for INLINEFORM2 we see that INLINEFORM3 for all INLINEFORM4 , so INLINEFORM5 as desired.",
"Otherwise, there exists some minimal INLINEFORM0 such that INLINEFORM1 . Then INLINEFORM2 for some INLINEFORM3 . Consider the unique INLINEFORM4 such that INLINEFORM5 . If INLINEFORM6 then from the proof of Lemma 2.8 we have that INLINEFORM7 and so INLINEFORM8 . Since INLINEFORM9 this means that INLINEFORM10 . If INLINEFORM11 then from the analogue of the proof of Lemma 2.8 for INLINEFORM12 , we obtain INLINEFORM13 . This completes the proof. INLINEFORM14 ",
"We are now ready to combine these lemmas to prove an important result, the analogue of Theorem 1.10 for GRUs:",
"Theorem 2.12. The INLINEFORM0 -language of the GRU described earlier in the section is INLINEFORM1 .",
"Proof. Consider any input word INLINEFORM0 into the GRU. We consider four cases:",
"In this case, we clearly have INLINEFORM0 and INLINEFORM1 from the proof of Corollary 2.7, so by Lemmas 2.9 and 2.10 we have that INLINEFORM2 ",
"with INLINEFORM0 . Furthermore from Lemma 2.11 we have that INLINEFORM1 so since INLINEFORM2 we must have INLINEFORM3 ",
"for sufficiently large INLINEFORM0 , as desired.",
"As in Case 1 we have that INLINEFORM0 and so by Lemmas 2.9 and 2.10 we have that INLINEFORM1 ",
"with INLINEFORM0 . Furthermore from Lemma 2.11 we have that INLINEFORM1 so here INLINEFORM2 ",
"for sufficiently large INLINEFORM0 , since the minimum value of INLINEFORM1 is clearly INLINEFORM2 .",
"Suppose INLINEFORM0 for some unique INLINEFORM1 . If INLINEFORM2 for some INLINEFORM3 then from Lemmas 2.9 and 2.10 and the proof of Lemma 2.8 we obtain INLINEFORM4 ",
"for sufficiently large INLINEFORM0 . If instead INLINEFORM1 then the same technique with the inequality INLINEFORM2 can be used to show INLINEFORM3 ",
"if INLINEFORM0 is sufficiently large. As before using Lemma 2.11 we have that INLINEFORM1 and combining these bounds we find that INLINEFORM2 ",
"In this case we know that INLINEFORM0 by Lemma 2.9, so we have INLINEFORM1 ",
"and by Lemma 2.11 we know that INLINEFORM0 so INLINEFORM1 ",
"Thus INLINEFORM0 if INLINEFORM1 and INLINEFORM2 otherwise, as desired. INLINEFORM3 ",
"We may now proceed to show the main theorem of this section, an analogue of Theorem 1.11 for GRUs:",
"Theorem 2.13. For any context-free language INLINEFORM0 suppose we relabel and write INLINEFORM1 for some regular language INLINEFORM2 , whose corresponding minimum DFA has INLINEFORM3 states. Then there exists a GRU with a hidden layer of size INLINEFORM4 whose INLINEFORM5 -language is INLINEFORM6 .",
"Proof. This follows by combining the GRUs from the proofs of Theorems 2.1 and 2.12, as we did for simple RNNs in the proof of Theorem 1.11. INLINEFORM0 ",
"Discussion 2.14. A critical idea in this section was to use the fact that INLINEFORM0 near INLINEFORM1 , and in fact this idea can be used for any activation function with a well-behaved Taylor series expansion around INLINEFORM2 .",
"Discussion 2.15. We “cheated\" a little bit by allowing INLINEFORM0 edge weights and by having INLINEFORM1 where INLINEFORM2 wasn't quite linear. However, INLINEFORM3 edge weights make sense in the context of allowing infinite precision, and simple nonlinear functions over the hidden nodes are often used in practice, like the common softmax activation function."
],
[
"We recognize two main avenues for further research. The first is to remove the necessity for infinite edge weights in the proof of Theorem 2.13, and the second is to extend the results of Theorems 1.11 and 2.13 to Turing recognizable languages.",
"In the proof of Lemma 2.11, edge weights of INLINEFORM0 are necessary for determining whether a hidden node ever becomes negative. Merely using large but finite weights does not suffice, because the values in the hidden state that they will be multiplied with are rapidly decreasing. Their product will vanish, and thus we would not be able to utilize the squashing properties of common activation functions as we did in the proof of Lemma 2.11. Currently we believe that it is possible to prove that GRUs are as computationally powerful as PDAs without using infinite edge weights, but are unaware of a method to do so.",
"Because to the our knowledge there is no analogue of the Chomsky-Sch INLINEFORM0 tzenberger Theorem for Turing recognizable languages, it seems difficult to directly extend our methods to prove that recurrent architectures are as computationally powerful as Turing machines. However, just as PDAs can lazily be described as a DFA with an associated stack, it is well-known that Turing machines are equally as powerful as DFAs with associated queues, which can be simulated with two stacks. Such an approach using two counters was used in proofs in [6], [8] to establish that RNNs with arbitrary precision can emulate Turing machines. We believe that an approach related to this fact could ultimately prove successful, but it would be more useful if set up as in the proofs above in a way that is faithful to the architecture of the neural networks. Counter automata of this sort are also quite unlike the usual implementations found for context-free languages or their extensions for natural languages. Work described in [10] demonstrates that in practice, LSTMs cannot really generalize to recognize the Dyck language INLINEFORM1 . It remains to investigate whether any recent neural network variation does in fact readily generalize outside its training set to “out of sample” examples. This would be an additional topic for future research."
],
[
"Consider the RNN described in the proof of Theorem 1.10 for INLINEFORM0 . We will show the evolution of its hidden state as it reads various inputs:",
"For this example we obtain INLINEFORM0 ",
"For this example we obtain INLINEFORM0 ",
"For this example we obtain INLINEFORM0 "
],
[
"Consider the GRU described in the proof of Theorem 2.12 for INLINEFORM0 and INLINEFORM1 . We will show the evolution of its hidden state as it reads various inputs:",
"For this example we obtain INLINEFORM0 ",
"For this example we obtain INLINEFORM0 ",
"For this example we obtain INLINEFORM0 "
]
],
"section_name": [
"Introduction",
"Simple RNNs",
"Gated RNNs",
"Suggestions for Further Research",
"A.1. Simple RNN D 2 \\displaystyle D_2 Examples",
"A.2. GRU D 2 \\displaystyle D_2 Examples"
]
} | {
"answers": [
{
"annotation_id": [
"c959da6b414a190d6beec3abf48569845faeb560",
"f2c281c248c14a9252a48fc0b8c5b38eb3753274"
],
"answer": [
{
"evidence": [
"Theorem 1.11. For any context-free language INLINEFORM0 , suppose we relabel and write INLINEFORM1 for some regular language INLINEFORM2 , whose corresponding minimum-size DFA has INLINEFORM3 states. Then there exists a simple RNN with a hidden layer of size INLINEFORM4 whose INLINEFORM5 -language is INLINEFORM6 .",
"Proof. Consider the simple RNN with INLINEFORM0 as its INLINEFORM1 -language described in the proof of Theorem 1.1 and the simple RNN with INLINEFORM2 as its INLINEFORM3 -language constructed to prove Theorem 1.10. Merge the INLINEFORM4 nodes in the input layer corresponding to the input and merge the single output nodes of both RNNs. Stack the two hidden layers, and add no new edges. There were INLINEFORM5 hidden nodes in the first RNN and INLINEFORM6 in the second, so altogether the new RNN has INLINEFORM7 hidden nodes.",
"The output of the new RNN is equal to the summed output of the two original RNNs, and from the proofs of Theorems 1.1 and 1.10 these outputs are always nonnegative. Thus the output of the new RNN is INLINEFORM0 if and only if the outputs of both old RNNs were INLINEFORM1 , immediately proving the theorem. INLINEFORM2",
"Discussion 1.12. This result shows that simple RNNs with arbitrary precision are at least as computationally powerful as PDAs."
],
"extractive_spans": [
"Theorem 1.11. For any context-free language INLINEFORM0 , suppose we relabel and write INLINEFORM1 for some regular language INLINEFORM2 , whose corresponding minimum-size DFA has INLINEFORM3 states. Then there exists a simple RNN with a hidden layer of size INLINEFORM4 whose INLINEFORM5 -language is INLINEFORM6 ."
],
"free_form_answer": "",
"highlighted_evidence": [
"Theorem 1.11. For any context-free language INLINEFORM0 , suppose we relabel and write INLINEFORM1 for some regular language INLINEFORM2 , whose corresponding minimum-size DFA has INLINEFORM3 states. Then there exists a simple RNN with a hidden layer of size INLINEFORM4 whose INLINEFORM5 -language is INLINEFORM6 .\n\nProof. Consider the simple RNN with INLINEFORM0 as its INLINEFORM1 -language described in the proof of Theorem 1.1 and the simple RNN with INLINEFORM2 as its INLINEFORM3 -language constructed to prove Theorem 1.10. Merge the INLINEFORM4 nodes in the input layer corresponding to the input and merge the single output nodes of both RNNs. Stack the two hidden layers, and add no new edges. There were INLINEFORM5 hidden nodes in the first RNN and INLINEFORM6 in the second, so altogether the new RNN has INLINEFORM7 hidden nodes.\n\nThe output of the new RNN is equal to the summed output of the two original RNNs, and from the proofs of Theorems 1.1 and 1.10 these outputs are always nonnegative. Thus the output of the new RNN is INLINEFORM0 if and only if the outputs of both old RNNs were INLINEFORM1 , immediately proving the theorem. INLINEFORM2\n\nDiscussion 1.12. This result shows that simple RNNs with arbitrary precision are at least as computationally powerful as PDAs."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Theorem 1.11. For any context-free language INLINEFORM0 , suppose we relabel and write INLINEFORM1 for some regular language INLINEFORM2 , whose corresponding minimum-size DFA has INLINEFORM3 states. Then there exists a simple RNN with a hidden layer of size INLINEFORM4 whose INLINEFORM5 -language is INLINEFORM6 .",
"Proof. Consider the simple RNN with INLINEFORM0 as its INLINEFORM1 -language described in the proof of Theorem 1.1 and the simple RNN with INLINEFORM2 as its INLINEFORM3 -language constructed to prove Theorem 1.10. Merge the INLINEFORM4 nodes in the input layer corresponding to the input and merge the single output nodes of both RNNs. Stack the two hidden layers, and add no new edges. There were INLINEFORM5 hidden nodes in the first RNN and INLINEFORM6 in the second, so altogether the new RNN has INLINEFORM7 hidden nodes.",
"The output of the new RNN is equal to the summed output of the two original RNNs, and from the proofs of Theorems 1.1 and 1.10 these outputs are always nonnegative. Thus the output of the new RNN is INLINEFORM0 if and only if the outputs of both old RNNs were INLINEFORM1 , immediately proving the theorem. INLINEFORM2",
"Discussion 1.12. This result shows that simple RNNs with arbitrary precision are at least as computationally powerful as PDAs."
],
"extractive_spans": [],
"free_form_answer": "They prove that for any context-free language L\nthere exists an RNN whose {0}-language is L.",
"highlighted_evidence": [
"Theorem 1.11. For any context-free language INLINEFORM0 , suppose we relabel and write INLINEFORM1 for some regular language INLINEFORM2 , whose corresponding minimum-size DFA has INLINEFORM3 states. Then there exists a simple RNN with a hidden layer of size INLINEFORM4 whose INLINEFORM5 -language is INLINEFORM6 .\n\nProof. Consider the simple RNN with INLINEFORM0 as its INLINEFORM1 -language described in the proof of Theorem 1.1 and the simple RNN with INLINEFORM2 as its INLINEFORM3 -language constructed to prove Theorem 1.10. Merge the INLINEFORM4 nodes in the input layer corresponding to the input and merge the single output nodes of both RNNs. Stack the two hidden layers, and add no new edges. There were INLINEFORM5 hidden nodes in the first RNN and INLINEFORM6 in the second, so altogether the new RNN has INLINEFORM7 hidden nodes.\n\nThe output of the new RNN is equal to the summed output of the two original RNNs, and from the proofs of Theorems 1.1 and 1.10 these outputs are always nonnegative. Thus the output of the new RNN is INLINEFORM0 if and only if the outputs of both old RNNs were INLINEFORM1 , immediately proving the theorem. INLINEFORM2\n\nDiscussion 1.12. This result shows that simple RNNs with arbitrary precision are at least as computationally powerful as PDAs."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"1265b629ad3b305965ca12e5b0286e7151561c0a",
"132b9983a959e0db59f8c14f697e81eb23bf4760"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"How do they prove that RNNs with arbitrary precision are as powerful as a pushdown automata?",
"What are edge weights?"
],
"question_id": [
"6e3e9818551fc2f8450bbf09b0fe82ac2506bc7a",
"0b5a505c1fca92258b9e83f53bb8cfeb81cb655a"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
""
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [],
"file": []
} | [
"How do they prove that RNNs with arbitrary precision are as powerful as a pushdown automata?"
] | [
[
"1906.06349-Simple RNNs-77",
"1906.06349-Simple RNNs-74",
"1906.06349-Simple RNNs-75"
]
] | [
"They prove that for any context-free language L\nthere exists an RNN whose {0}-language is L."
] | 321 |
1908.06493 | TwistBytes -- Hierarchical Classification at GermEval 2019: walking the fine line (of recall and precision) | We present here our approach to the GermEval 2019 Task 1 - Shared Task on hierarchical classification of German blurbs. We achieved first place in the hierarchical subtask B and second place in the root-node, flat classification subtask A. In subtask A, we applied a simple multi-feature TF-IDF extraction method, using different n-gram ranges and stopword removal in each feature extraction module. The classifier on top was a standard linear SVM. For the hierarchical classification, we used a local approach, which was more lightweight but similar to the one used in subtask A. The key point of our approach was the application of a post-processing step to cope with the multi-label aspect of the task, increasing the recall without surpassing the precision score. | {
"paragraphs": [
[
"Hierarchical Multi-label Classification (HMC) is an important task in Natural Language Processing (NLP). Several NLP problems can be formulated in this way, such as patent, news articles, books and movie genres classification (as well as many other classification tasks like diseases, gene function prediction). Also, many tasks can be formulated as hierarchical problem in order to cope with a large amount of labels to assign to the sample, in a divide and conquer manner (with pseudo meta-labels). A theoretical survey exists BIBREF0 discussing on how the task can be engaged, several approaches and the prediction quality measures. Basically, the task in HMC is to assign a sample to one or many nodes of a Directed Acyclic Graph (DAG) (in special cases a tree) based on features extracted from the sample. In the case of possible multiple parent, the evaluation of the prediction complicates heavily, for once since several paths can be taken, but only in a joining node must be considered.",
"The GermEval 2019 Task 1 - Shared Task on hierarchical classification of German blurbs focus on the concrete challenge of classifying short descriptive texts of books into the root nodes (subtask A) or into the entire hierarchy (subtask B). The hierarchy can be described as a tree and consisted of 343 nodes, in which there are 8 root nodes. With about 21k samples it was not clear if deep learning methods or traditional NLP methods would perform better. Especially, in the subtask A, since for subtask B some classes had only a few examples. Although an ensemble of traditional and deep learning methods could profit in this area, it is difficult to design good heterogeneous ensembles.",
"Our approach was a traditional NLP one, since we employed them successfully in several projects BIBREF1, BIBREF2, BIBREF3, with even more samples and larger hierarchies. We compared also new libraries and our own implementation, but focused on the post-processing of the multi-labels, since this aspect seemed to be the most promising improvement to our matured toolkit for this task. This means but also, to push recall up and hope to not overshot much over precision."
],
[
"The dataset released by BIBREF4 enabled a major boost in HMC on text. This was a seminating dataset since not only was very large (800k documents) but the hierarchies were large (103 and 364). Many different versions were used in thousands of papers. Further, the label density BIBREF5 was considerably high allowing also to be treated as multi-label, but not too high as to be disregarded as a common real-world task. Some other datasets were also proposed (BIBREF6, BIBREF7), which were far more difficult to classify. This means consequently that a larger mature and varied collection of methods were developed, from which we cannot cover much in this paper.",
"An overview of hierarchical classification was given in BIBREF0 covering many aspects of the challenge. Especially, there are local approaches which focus on only part of the hierarchy when classifying in contrast to the global (big bang) approaches.",
"A difficult to answer question is about which hierarchical quality prediction measure to use since there are dozens of. An overview with a specific problem is given in BIBREF8. An approach which was usually taken was to select several measures, and use a vote, although many measures inspect the same aspect and therefore correlate, creating a bias. The GermEval competition did not take that into account and concentrates only on the flat micro F-1 measures.",
"Still, a less considered problem in HMC is the number of predicted labels, especially regarding the post-processing of the predictions. We discussed this thoroughly in BIBREF1. The main two promising approaches were proposed by BIBREF9 and BIBREF10. The former focuses on column and row based methods for estimating the appropriate threshold to convert a prediction confidence into a label prediction. BIBREF10 used the label cardinality (BIBREF5), which is the mean average label per sample, of the training set and change the threshold globally so that the test set achieved similar label cardinality."
],
[
"The shared task aimed at Hierarchical Multi-label Classification (HMC) of Blurbs. Blurbs are short texts consisting of some German sentences. Therefore, a standard framework of word vectorization can be applied. There were 14548 training, 2079 development, and 4157 test samples.",
"The hierarchy can be considered as an ontology, but for the sake of simplicity, we regard it as a simple tree, each child node having only on single parent node, with 4 levels of depth, 343 labels from which 8 are root nodes, namely: 'Literatur & Unterhaltung', 'Ratgeber', 'Kinderbuch & Jugendbuch', 'Sachbuch', 'Ganzheitliches Bewusstsein', 'Glaube & Ethik', and 'Künste, Architektur & Garten'.",
"The label cardinality of the training dataset was about 1.070 (train: 1.069, dev: 1.072) in the root nodes, pointing to a clearly low multi-label problem, although there were samples with up to 4 root nodes assigned. This means that the traditional machine learning systems would promote single label predictions. Subtask B has a label cardinality of 3.107 (train: 3.106, dev: 3.114), with 1 up to 14 labels assigned per sample. Table TABREF4 shows a short dataset summary by task."
],
[
"We used two different approaches for each subtask. In subtask A, we used a heavier feature extraction method and a linear Support-Vector-Machine (SVM) whereas for subtask B we used a more light-weighted feature extraction with same SVM but in a local hierarchical classification fashion, i.e. for each parent node such a base classifier was used. We describe in the following the approaches in detail. They were designed to be light and fast, to work almost out of the box, and to easily generalise."
],
[
"For subtask A, we use the one depicted in Fig. FIGREF8, for subtask B, a similar more light-weight approach was employed as base classifier (described later). As can be seen, several vectorizer based on different n-grams (word and character) with a maximum of 100k features and preprocessing, such as using or not stopwords, were applied to the blurbs. The term frequency obtained were then weighted with inverse document frequency (TF-IDF). The results of five different feature extraction and weighting modules were given as input for a vanilla SVM classifier (parameter C=1.5) which was trained in a one-versus-all fashion."
],
[
"We use a local parent node strategy, which means the parent node decides which of its children will be assigned to the sample. This creates also the necessity of a virtual root node. For each node the same base classifier is trained independently of the other nodes. We also adapt each feature extraction with the classifier in each single node much like BIBREF11. As base classifier, a similar one to Fig. FIGREF8 was used, where only one 1-7 word n-gram, one 1-3 word n-gram with German stopwords removal and one char 2-3 n-gram feature extraction were employed, all with maximum 70k features. We used two implementations achieving very similar results. In the following, we give a description of both approaches."
],
[
"Our implementation is light-weighted and optimized for a short pipeline, however for large amount of data, saving each local parent node model to the disk. However, it does not conforms the way scikit-learn is designed. Further, in contrast to the Scikit Learn Hierarchical, we give the possibility to optimize with a grid search each feature extraction and classifier per node. This can be quite time consuming, but can also be heavily parallelized. In the final phase of the competition, we did not employ it because of time constrains and the amount of experiments performed in the Experiments Section was only possible with a light-weighted implementation."
],
[
"Scikit Learn Hierarchical (Hsklearn) was forked and improved to deal with multi-labels for the task, especially, allowing each node to perform its own preprocessing. This guaranteed that the performance of our own implementation was surpassed and that a contribution for the community was made. This ensured as well that the results are easily reproducible."
],
[
"Many classifiers can predict a score or confidence about the prediction. Turning this score into the prediction is usually performed by setting a threshold, such as 0 and 0.5, so labels which have a score assigned greater than that are assigned to the sample. This might be not the optimal threshold in the multi-label classification setup and there are many approaches to set it (BIBREF9). Although these methods concentrate in the sample or label, we have had good results with a much more general approach.",
"As described in BIBREF1, Read and Pfahringer BIBREF10 introduce a method (referred hereinafter to as LCA) to estimate the threshold globally. Their method chooses the threshold that minimizes the difference between the label cardinality of the training set and the predicted set.",
"where $LCard(D_T)$ denotes the label cardinality of training set and $LCard(H_t(D_S))$ the label cardinality of the predictions on test set if $t$ was applied as the threshold. For that the predictions need to be normalized to unity. We also tested this method not for the label cardinality over all samples and labels but only labelwise. In our implementation, the scores of the SVM were not normalized, which produced slightly different results from a normalized approach.",
"For the HMC subtask B, we used a simple threshold based on the results obtained for LCA. Especially, using multiple models per node could cause a different scaling."
],
[
"We also experimented with other different approaches. The results of the first two were left out (they did not perform better), for the sake of conciseness.",
"Meta Crossvalidation Classifier: BIBREF3",
"Semi-Supervised Learning: BIBREF12, BIBREF3",
"Flair: Flair BIBREF13 with different embeddings (BERT (out of memory), Flair embeddings (forward and backward German)). Such sophisticated language models require much more computational power and many examples per label. This was the case for the subtask A but subtask B was not."
],
[
"We divide this Section in two parts, in first we conduct experiments on the development set and in the second on the test set, discussing there the competition results."
],
[
"The experiments with alternative approaches, such as Flair, meta-classifier and semi-supervised learning yielded discouraging results, so we will concentrate in the SVM-TF-IDF methods. Especially, semi-supervised proved in other setups very valuable, here it worsened the prediction quality, so we could assume the same \"distribution\" of samples were in the training and development set (and so we concluded in the test set).",
"In Table TABREF25, the results of various steps towards the final model can be seen. An SVM-TF-IDF model with word unigram already performed very well. Adding more n-grams did not improve, on the contrary using n-grams 1-7 decreased the performance. Only when removing stopwords it improved again, but then substantially. Nonetheless, a character 2-3 n-gram performed best between these simple models. This is interesting, since this points much more to not which words were used, but more on the phonetics.",
"Using the ensemble feature model produced the best results without post-processing. The simple use of a low threshold yielded also astonishingly good results. This indicates that the SVM's score production was very good, yet the threshold 0 was too cautious.",
"In Fig. FIGREF26, a graph showing the dependency between the threshold set and the micro F-1 score achieved in the development set is depicted. The curve fitted was $a*x^2+b*x+c$ which has the maximum at approx. -0.2. We chose -0.25 in the expectation that the test set would not be exactly as the development set and based on our previous experience with other multi-label datasets (such as the RCv1-v2) which have an optimal threshold at -0.3. Also as we will see, the results proved us right achieving the best recall, yet not surpassing the precision score. This is a crucial aspect of the F-1 measure, as it is the harmonic mean it will push stronger and not linearly the result towards the lower end, so if decreasing the threshold, increases the recall linearly and decreases also the precision linearly, balancing both will consequently yield a better F-1 score.",
"Although in Fig. FIGREF26, the curve fitted is parabolic, in the interval between -0.2 and 0, the score is almost linear (and strongly monotone decreasing) giving a good indication that at least -0.2 should be a good threshold to produce a higher F-1 score without any loss.",
"Even with such a low threshold as -0.25, there were samples without any prediction. We did not assign any labels to them, as such post-process could be hurtful in the test set, although in the development it yielded the best result (fixing null).",
"In Table TABREF27, the results of the one-vs-all approach regarding the true negative, false positives, false negatives and true positives for the different threshold 0, -0.25 and LCA are shown. Applying other threshold than 0 caused the number of true positives to increase without much hurting the number of true negatives. In fact, the number of false positives and false negatives became much more similar for -0.25 and LCA than for 0. This results in the score of recall and precision being also similar, in a way that the micro F-1 is increased. Also, the threshold -0.25 resulted that the number of false positive is greater than the number of false negatives, than for example -0.2. LCA produced similar results, but was more conservative having a lower false positive and higher true negative and false negative score.",
"We also noticed that the results produced by subtask A were better than that of subtask B for the root nodes, so that a possible crossover between the methods (flat and hierarchical) would be better, however we did not have the time to implement it. Although having a heavier feature extraction for the root nodes could also perform similar (and decreasing complexity for lower nodes). We use a more simple model for the subtask B so that it would be more difficult to overfit.",
"Table TABREF28 shows the comparison of the different examined approaches in subtask B in the preliminary phase. Both implementations, Hsklearn and our own produced very similar results, so for the sake of reproducibility, we chose to continue with Hsklearn. We can see here, in contrary to the subtask A, that -0.25 achieved for one configuration better results, indicating that -0.2 could be overfitted on subtask A and a value diverging from that could also perform better. The extended approach means that an extra feature extraction module was added (having 3 instead of only 2) with n-gram 1-2 and stopwords removal. The LCA approach yielded here a worse score in the normalized but almost comparable in the non-normalized. However, the simple threshold approach performed better and therefore more promising."
],
[
"In Table TABREF30, the best results by team regarding micro F-1 are shown. Our approach reached second place. The difference between the first four places were mostly 0.005 between each, showing that only a minimal change could lead to a place switching. Also depicted are not null improvements results, i.e. in a following post-processing, starting from the predictions, the highest score label is predicted for each sample, even though the score was too low. It is worth-noting that the all but our approaches had much higher precision compared to the achieved recall.",
"Despite the very high the scores, it will be difficult to achieve even higher scores with simple NLP scores. Especially, the n-gram TF-IDF with SVM could not resolve descriptions which are science fiction, but are written as non-fiction book, where context over multiple sentences and word groups are important for the prediction."
],
[
"The best results by team of subtask B are depicted in Table TABREF33. We achieved the highest micro F-1 score and the highest recall. Setting the threshold so low was still too high for this subtask, so precision was still much higher than recall, even in our approach. We used many parameters from subtask A, such as C parameter of SVM and threshold. However, the problem is much more complicated and a grid search over the nodes did not complete in time, so many parameters were not optimised. Moreover, although it is paramount to predict the parent nodes right, so that a false prediction path is not chosen, and so causing a domino effect, we did not use all parameters of the classifier of subtask A, despite the fact it could yield better results. It could as well have not generalized so good.",
"The threshold set to -0.25 shown also to produce better results with micro F-1, in contrast to the simple average between recall and precision. This can be seen also by checking the average value between recall and precision, by checking the sum, our approach produced 0.7072+0.6487 = 1.3559 whereas the second team had 0.7377+0.6174 = 1.3551, so the harmonic mean gave us a more comfortable winning marge."
],
[
"We achieved first place in the most difficult setting of the shared Task, and second on the \"easier\" subtask. We achieved the highest recall and this score was still lower as our achieved precision (indicating a good balance). We could reuse much of the work performed in other projects building a solid feature extraction and classification pipeline. We demonstrated the need for post-processing measures and how the traditional methods performed against new methods with this problem. Further, we improve a hierarchical classification open source library to be easily used in the multi-label setup achieving state-of-the-art performance with a simple implementation.",
"The high scoring of such traditional and light-weighted methods is an indication that this dataset has not enough amount of data to use deep learning methods. Nonetheless, the amount of such datasets will probably increase, enabling more deep learning methods to perform better.",
"Many small improvements were not performed, such as elimination of empty predictions and using label names as features. This will be performed in future work."
],
[
"We thank Mark Cieliebak and Pius von Däniken for the fruitful discussions. We also thank the organizers of the GermEval 2019 Task 1."
]
],
"section_name": [
"Introduction",
"Related Work",
"Data and Methodology ::: Task Definition and Data Description",
"Data and Methodology ::: System Definition",
"Data and Methodology ::: System Definition ::: Classifiers ::: Base Classifier",
"Data and Methodology ::: System Definition ::: Hierarchical Classifier",
"Data and Methodology ::: System Definition ::: Hierarchical Classifier ::: Recursive Grid Search Parent Node",
"Data and Methodology ::: System Definition ::: Hierarchical Classifier ::: Scikit Learn Hierarchical",
"Data and Methodology ::: System Definition ::: Post-processing: Threshold",
"Data and Methodology ::: Alternative approaches",
"Experiments",
"Experiments ::: Preliminary Experiments on Development Set",
"Experiments ::: Subtask A",
"Experiments ::: Subtask B",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"30defd2d78612f09f4ed973de3af6e76c1bca89a",
"db369150334149360da715f67bac3c490db9cc60"
],
"answer": [
{
"evidence": [
"Many classifiers can predict a score or confidence about the prediction. Turning this score into the prediction is usually performed by setting a threshold, such as 0 and 0.5, so labels which have a score assigned greater than that are assigned to the sample. This might be not the optimal threshold in the multi-label classification setup and there are many approaches to set it (BIBREF9). Although these methods concentrate in the sample or label, we have had good results with a much more general approach.",
"As described in BIBREF1, Read and Pfahringer BIBREF10 introduce a method (referred hereinafter to as LCA) to estimate the threshold globally. Their method chooses the threshold that minimizes the difference between the label cardinality of the training set and the predicted set.",
"For the HMC subtask B, we used a simple threshold based on the results obtained for LCA. Especially, using multiple models per node could cause a different scaling.",
"Table TABREF28 shows the comparison of the different examined approaches in subtask B in the preliminary phase. Both implementations, Hsklearn and our own produced very similar results, so for the sake of reproducibility, we chose to continue with Hsklearn. We can see here, in contrary to the subtask A, that -0.25 achieved for one configuration better results, indicating that -0.2 could be overfitted on subtask A and a value diverging from that could also perform better. The extended approach means that an extra feature extraction module was added (having 3 instead of only 2) with n-gram 1-2 and stopwords removal. The LCA approach yielded here a worse score in the normalized but almost comparable in the non-normalized. However, the simple threshold approach performed better and therefore more promising."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Many classifiers can predict a score or confidence about the prediction. Turning this score into the prediction is usually performed by setting a threshold, such as 0 and 0.5, so labels which have a score assigned greater than that are assigned to the sample. ",
"As described in BIBREF1, Read and Pfahringer BIBREF10 introduce a method (referred hereinafter to as LCA) to estimate the threshold globally. Their method chooses the threshold that minimizes the difference between the label cardinality of the training set and the predicted set.",
"For the HMC subtask B, we used a simple threshold based on the results obtained for LCA. Especially, using multiple models per node could cause a different scaling.",
"Table TABREF28 shows the comparison of the different examined approaches in subtask B in the preliminary phase. Both implementations, Hsklearn and our own produced very similar results, so for the sake of reproducibility, we chose to continue with Hsklearn. "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The best results by team of subtask B are depicted in Table TABREF33. We achieved the highest micro F-1 score and the highest recall. Setting the threshold so low was still too high for this subtask, so precision was still much higher than recall, even in our approach. We used many parameters from subtask A, such as C parameter of SVM and threshold. However, the problem is much more complicated and a grid search over the nodes did not complete in time, so many parameters were not optimised. Moreover, although it is paramount to predict the parent nodes right, so that a false prediction path is not chosen, and so causing a domino effect, we did not use all parameters of the classifier of subtask A, despite the fact it could yield better results. It could as well have not generalized so good."
],
"extractive_spans": [],
"free_form_answer": "With post-processing",
"highlighted_evidence": [
"The best results by team of subtask B are depicted in Table TABREF33. We achieved the highest micro F-1 score and the highest recall. Setting the threshold so low was still too high for this subtask, so precision was still much higher than recall, even in our approach. We used many parameters from subtask A, such as C parameter of SVM and threshold. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"126a3f686b936860db1f5fb851883dc67f0f9270",
"d9f133271fca96aff890dac790c892e95c61d209"
],
"answer": [
{
"evidence": [
"Many classifiers can predict a score or confidence about the prediction. Turning this score into the prediction is usually performed by setting a threshold, such as 0 and 0.5, so labels which have a score assigned greater than that are assigned to the sample. This might be not the optimal threshold in the multi-label classification setup and there are many approaches to set it (BIBREF9). Although these methods concentrate in the sample or label, we have had good results with a much more general approach.",
"As described in BIBREF1, Read and Pfahringer BIBREF10 introduce a method (referred hereinafter to as LCA) to estimate the threshold globally. Their method chooses the threshold that minimizes the difference between the label cardinality of the training set and the predicted set.",
"where $LCard(D_T)$ denotes the label cardinality of training set and $LCard(H_t(D_S))$ the label cardinality of the predictions on test set if $t$ was applied as the threshold. For that the predictions need to be normalized to unity. We also tested this method not for the label cardinality over all samples and labels but only labelwise. In our implementation, the scores of the SVM were not normalized, which produced slightly different results from a normalized approach.",
"For the HMC subtask B, we used a simple threshold based on the results obtained for LCA. Especially, using multiple models per node could cause a different scaling."
],
"extractive_spans": [],
"free_form_answer": "Set treshold for prediction.",
"highlighted_evidence": [
"Many classifiers can predict a score or confidence about the prediction. Turning this score into the prediction is usually performed by setting a threshold, such as 0 and 0.5, so labels which have a score assigned greater than that are assigned to the sample. This might be not the optimal threshold in the multi-label classification setup and there are many approaches to set it (BIBREF9). Although these methods concentrate in the sample or label, we have had good results with a much more general approach.\n\nAs described in BIBREF1, Read and Pfahringer BIBREF10 introduce a method (referred hereinafter to as LCA) to estimate the threshold globally. Their method chooses the threshold that minimizes the difference between the label cardinality of the training set and the predicted set.",
" We also tested this method not for the label cardinality over all samples and labels but only labelwise. ",
"For the HMC subtask B, we used a simple threshold based on the results obtained for LCA. Especially, using multiple models per node could cause a different scaling."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Many classifiers can predict a score or confidence about the prediction. Turning this score into the prediction is usually performed by setting a threshold, such as 0 and 0.5, so labels which have a score assigned greater than that are assigned to the sample. This might be not the optimal threshold in the multi-label classification setup and there are many approaches to set it (BIBREF9). Although these methods concentrate in the sample or label, we have had good results with a much more general approach."
],
"extractive_spans": [
"Turning this score into the prediction is usually performed by setting a threshold, such as 0 and 0.5, so labels which have a score assigned greater than that are assigned to the sample"
],
"free_form_answer": "",
"highlighted_evidence": [
"Many classifiers can predict a score or confidence about the prediction. Turning this score into the prediction is usually performed by setting a threshold, such as 0 and 0.5, so labels which have a score assigned greater than that are assigned to the sample. This might be not the optimal threshold in the multi-label classification setup and there are many approaches to set it (BIBREF9). Although these methods concentrate in the sample or label, we have had good results with a much more general approach."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"2698cc2f9b0efe595bb7c6e0451b3ab487a62c92",
"718bf53fbe9fed729b97279496a83cd7b13cb0f2"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"Flair: Flair BIBREF13 with different embeddings (BERT (out of memory), Flair embeddings (forward and backward German)). Such sophisticated language models require much more computational power and many examples per label. This was the case for the subtask A but subtask B was not.",
"FLOAT SELECTED: Table 2: Micro F-1 scores of different approaches on the development set, best four values marked in bold"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Flair: Flair BIBREF13 with different embeddings (BERT (out of memory), Flair embeddings (forward and backward German)). Such sophisticated language models require much more computational power and many examples per label. This was the case for the subtask A but subtask B was not.",
"FLOAT SELECTED: Table 2: Micro F-1 scores of different approaches on the development set, best four values marked in bold"
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"372693caca63d5090d2d930b68fbeb31772e17b0",
"dc932cde95206c7afb0153eded31d97dfdfe3c94"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Micro F-1 scores of different approaches on the development set, best four values marked in bold"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Micro F-1 scores of different approaches on the development set, best four values marked in bold"
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Does the paper report F1-scores with and without post-processing for the second task?",
"What does post-processing do to the output?",
"Do they test any neural architecture?",
"Is the performance of a Naive Bayes approach evaluated?"
],
"question_id": [
"2b32cf05c5e736f764ceecc08477e20ab9f2f5d7",
"014a3aa07686ee18a86c977bf0701db082e8480b",
"6e6d64e2cb7734599890fff3f10c18479756d540",
"8675d39f1647958faab7fa40cdaab207d4fe5a29"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"German",
"German",
"German",
"German"
],
"topic_background": [
"research",
"research",
"research",
"research"
]
} | {
"caption": [
"Table 1: Specs for dataset for subtasks A and B",
"Figure 1: SVM-TF-IDF classifier with ensemble of textual features",
"Figure 2: Threshold/micro F-1 dependency",
"Table 2: Micro F-1 scores of different approaches on the development set, best four values marked in bold",
"Table 3: Confusion matrix between label and others for threshold (t) =0 and =-0.25 (true negative: tp, false negative: fn, false positive: fp, true positive: tp)",
"Table 4: Preliminary experiments on subtask B, best three values marked in bold",
"Table 5: Results of subtask A, best micro F-1 score by team",
"Table 6: Results of subtask B, best micro F-1 score by team"
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"7-Table5-1.png",
"7-Table6-1.png"
]
} | [
"Does the paper report F1-scores with and without post-processing for the second task?",
"What does post-processing do to the output?"
] | [
[
"1908.06493-Experiments ::: Subtask B-0",
"1908.06493-Data and Methodology ::: System Definition ::: Post-processing: Threshold-1",
"1908.06493-Experiments ::: Preliminary Experiments on Development Set-8",
"1908.06493-Data and Methodology ::: System Definition ::: Post-processing: Threshold-0",
"1908.06493-Data and Methodology ::: System Definition ::: Post-processing: Threshold-3"
],
[
"1908.06493-Data and Methodology ::: System Definition ::: Post-processing: Threshold-1",
"1908.06493-Data and Methodology ::: System Definition ::: Post-processing: Threshold-2",
"1908.06493-Data and Methodology ::: System Definition ::: Post-processing: Threshold-3",
"1908.06493-Data and Methodology ::: System Definition ::: Post-processing: Threshold-0"
]
] | [
"With post-processing",
"Set treshold for prediction."
] | 322 |
1909.12208 | An Investigation into the Effectiveness of Enhancement in ASR Training and Test for Chime-5 Dinner Party Transcription | Despite the strong modeling power of neural network acoustic models, speech enhancement has been shown to deliver additional word error rate improvements if multi-channel data is available. However, there has been a longstanding debate whether enhancement should also be carried out on the ASR training data. In an extensive experimental evaluation on the acoustically very challenging CHiME-5 dinner party data we show that: (i) cleaning up the training data can lead to substantial error rate reductions, and (ii) enhancement in training is advisable as long as enhancement in test is at least as strong as in training. This approach stands in contrast and delivers larger gains than the common strategy reported in the literature to augment the training database with additional artificially degraded speech. Together with an acoustic model topology consisting of initial CNN layers followed by factorized TDNN layers we achieve with 41.6 % and 43.2 % WER on the DEV and EVAL test sets, respectively, a new single-system state-of-the-art result on the CHiME-5 data. This is a 8 % relative improvement compared to the best word error rate published so far for a speech recognizer without system combination. | {
"paragraphs": [
[
"Neural networks have outperformed earlier GMM based acoustic models in terms of modeling power and increased robustness to acoustic distortions. Despite that, speech enhancement has been shown to deliver additional WER improvements, if multi-channel data is available. This is due to their ability to exploit spatial information, which is reflected by phase differences of microphone channels in the STFT domain. This information is not accessible by the ASR system, at least not if it operates on the common log mel spectral or cepstral feature sets. Also, dereverberation algorithms have been shown to consistently improve ASR results, since the temporal dispersion of the signal caused by reverberation is difficult to capture by an ASR acoustic model BIBREF0.",
"However, there has been a long debate whether it is advisable to apply speech enhancement on data used for ASR training, because it is generally agreed upon that the recognizer should be exposed to as much acoustic variability as possible during training, as long as this variability matches the test scenario BIBREF1, BIBREF2, BIBREF3. Multi-channel speech enhancement, such as acoustic BF or source separation, would not only reduce the acoustic variability, it would also result in a reduction of the amount of training data by a factor of $M$, where $M$ is the number of microphones BIBREF4. Previous studies have shown the benefit of training an ASR on matching enhanced speech BIBREF5, BIBREF6 or on jointly training the enhancement and the acoustic model BIBREF7. Alternatively, the training data is often artificially increased by adding even more degraded speech to it. For instance, Ko et al. BIBREF8 found that adding simulated reverberated speech improves accuracy significantly on several large vocabulary tasks. Similarly, Manohar et al. BIBREF9 improved the WER of the baseline CHiME-5 system by relative 5.5% by augmenting the training data with approx. 160hrs of simulated reverberated speech. However, not only can the generation of new training data be costly and time consuming, the training process itself is also prolonged if the amount of data is increased.",
"In this contribution we advocate for the opposite approach. Although we still believe in the argument that ASR training should see sufficient variability, instead of adding degraded speech to the training data, we clean up the training data. We make, however, sure that the remaining acoustic variability is at least as large as on the test data. By applying a beamformer to the multi-channel input, we even reduce the amount of training data significantly. Consequently, this leads to cheaper and faster acoustic model training.",
"We perform experiments using data from the CHiME-5 challenge which focuses on distant multi-microphone conversational ASR in real home environments BIBREF10. The CHiME-5 data is heavily degraded by reverberation and overlapped speech. As much as 23% of the time more than one speaker is active at the same time BIBREF11. The challenge's baseline system poor performance (about 80% WER) is an indication that ASR training did not work well. Recently, GSS enhancement on the test data was shown to significantly improve the performance of an acoustic model, which had been trained with a large amount of unprocessed and simulated noisy data BIBREF12. GSS is a spatial mixture model based blind source separation approach which exploits the annotation given in the CHiME-5 database for initialization and, in this way, avoids the frequency permutation problem BIBREF13.",
"We conjectured that cleaning up the training data would enable a more effective acoustic model training for the CHiME-5 scenario. We have therefore experimented with enhancement algorithms of various strengths, from relatively simple beamforming over single-array GSS to a quite sophisticated multi-array GSS approach, and tested all combinations of training and test data enhancement methods. Furthermore, compared to the initial GSS approach in BIBREF13, we describe here some modifications, which led to improved performance. We also propose an improved neural acoustic modeling structure compared to the CHiME-5 baseline system described in BIBREF9. It consists of initial CNN layers followed by TDNN-F layers, instead of a homogeneous TDNN-F architecture.",
"Using a single acoustic model trained with 308hrs of training data, which resulted after applying multi-array GSS data cleaning and a three-fold speed perturbation, we achieved a WER of 41.6% on the development (DEV) and 43.2% on the evaluation (EVAL) test set of CHiME-5, if the test data is also enhanced with multi-array GSS. This compares very favorably with the recently published top-line in BIBREF12, where the single-system best result, i.e., the WER without system combination, was 45.1% and 47.3% on DEV and EVAL, respectively, using an augmented training data set of 4500hrs total.",
"The rest of this paper is structured as follows. Section SECREF2 describes the CHiME-5 corpus, Section SECREF3 briefly presents the guided source separation enhancement method, Section SECREF4 shows the ASR experiments and the results, followed by a discussion in Section SECREF5. Finally, the paper is concluded in Section SECREF6."
],
[
"The CHiME-5 corpus comprises twenty dinner party recordings (sessions) lasting for approximately 2hrs each. A session contains the conversation among the four dinner party participants. Recordings were made in kitchen, dining and living room areas with each phase lasting for a minimum of 30mins. 16 dinner parties were used for training, 2 were used for development, and 2 were used for evaluation.",
"There were two types of recording devices collecting CHiME-5 data: distant 4-channels (linear) Microsoft Kinect arrays (referred to as units or `U') and in-ear Soundman OKM II Classic Studio binaural microphones (referred to as worn microphones or `W'). Six Kinect arrays were used in total and they were placed such that at least two units were able to capture the acoustic environment in each recording area. Each dinner party participant wore in-ear microphones which were subsequently used to facilitate human audio transcription of the data. The devices were not time synchronized during recording. Therefore, the W and the U signals had to be aligned afterwards using a correlation based approach provided by the organizers. Depending on how many arrays were available during test time, the challenge had a single (reference) array and a multiple array track. For more details about the corpus, the reader is referred to BIBREF10."
],
[
"GSS enhancement is a blind source separation technique originally proposed in BIBREF13 to alleviate the speaker overlap problem in CHiME-5. Given a mixture of reverberated overlapped speech, GSS aims to separate the sources using a pure signal processing approach. An EM algorithm estimates the parameters of a spatial mixture model and the posterior probabilities of each speaker being active are used for mask based beamforming.",
"An overview block diagram of this enhancement by source separation is depicted in fig:enhancementblock. It follows the approach presented in BIBREF12, which was shown to outperform the baseline version. The system operates in the STFT domain and consists of two stages: (1) a dereverberation stage, and (2) a guided source separation stage. For the sake of simplicity, the overall system is referred to as GSS for the rest of the paper. Regarding the first stage, the multiple input multiple output version of the WPE method was used for dereverberation ($M$ inputs and $M$ outputs) BIBREF14, BIBREF15 and, regarding the second stage, it consists of a spatial MM BIBREF16 and a source extraction (SE) component. The model has five mixture components, one representing each speaker, and an additional component representing the noise class.",
"The role of the MM is to support the source extraction component for estimating the target speech. The class affiliations computed in the E-step of the EM algorithm are employed to estimate spatial covariance matrices of target signals and interferences, from which the coefficients of an MVDR beamformer are computed BIBREF17. The reference channel for the beamformer is estimated based on an SNR criterionBIBREF18. The beamformer is followed by a postfilter to reduce the remaining speech distortions BIBREF19, which in turn is followed by an additional (optional) masking stage to improve crosstalk suppression. Those masks are also given by the mentioned class affiliations. For the single array (CHiME-5) track, simulations have shown that multiplying the beamformer output with the target speaker mask improves the performance on the U data, but the same approach degrades the performance in the multiple array track BIBREF13. This is because the spatial selectivity of a single array is very limited in CHiME-5: the speakers' signals arrive at the array, which is mounted on the wall at some distance, at very similar impinging angles, rendering single array beamforming rather ineffective. Consequently, additional masking has the potential to improve the beamformer performance. Conversely, the MM estimates are more accurate in the multiple array case since they benefit from a more diverse spatial arrangement of the microphones, and the signal distortions introduced by the additional masking rather degrade the performance. Consequently, for our experiments we have used the masking approach for the single array track, but not for the multiple array one.",
"GSS exploits the baseline CHiME-5 speaker diarization information available from the transcripts (annotations) to determine when multiple speakers talk simultaneously (see fig:activity). This crosstalk information is then used to guide the parameter estimation of the MM both during EM initialization (posterior masks set to one divided by the number of active speakers for active speakers' frames, and zero for the non-active speakers) and after each E-step (posterior masks are clamped to zero for non-active speakers).",
"The initialization of the EM for each mixture component is very important for the correct convergence of the algorithm. If the EM initialization is close enough to the final solution, then it is expected that the algorithm will correctly separate the sources and source indices are not permuted across frequency bins. This has a major practical application, since frequency permutation solvers like BIBREF20 become obsolete.",
"Temporal context also plays an important role in the EM initialization. Simulations have shown that a large context of 15 seconds left and right of the considered segment improves the mixture model estimation performance significantly for CHiME-5 BIBREF13. However, having such a large temporal context may become problematic when the speakers are moving, because the estimated spatial covariance matrix can become outdated due to the movement BIBREF12. Alternatively, one can run the EM first with a larger temporal context until convergence, then drop the context and re-run it for some more iterations. As shown later in the paper, this approach did not improve ASR performance. Therefore, the temporal context was only used for dereverberation and the mixture model parameter estimation, while for the estimation of covariance matrices for beamforming the context was dropped and only the original segment length was considered BIBREF12.",
"Another avenue we have explored for further source separation improvement was to refine the baseline CHiME-5 annotations using ASR output (see fig:enhancementblock). A first-pass decoding using an ASR system is used to predict silence intervals. Then this information is used to adjust the time annotations, which are used in the EM algorithm as described above. When the ASR decoder indicates silence for a speaker, the corresponding class posterior in the MM is forced to zero.",
"Depending on the number of available arrays for CHiME-5, two flavours of GSS enhancement were used in this work. In the single array track, all 4 channels of the array are used as input ($M = 4$), and the system is referred to as GSS1. In the multi array track, all six arrays are stacked to form a 24 channels super-array ($M = 24$), and this system is denoted as GSS6. The baseline time synchronization provided by the challenge organizers was sufficient to align the data for GSS6."
],
[
"Experiments were performed using the CHiME-5 data. Distant microphone recordings (U data) during training and/or testing were processed using the speech enhancement methods depicted in Table TABREF6. Speech was either left unprocessed, enhanced using a weighted delay-and-sum beamformer (BFIt) BIBREF21 with or without dereverberation (WPE), or processed using the guided source separation (GSS) approach described in Section SECREF3. In Table TABREF6, the strength of the enhancement increases from top to bottom, i.e., GSS6 signals are much cleaner than the unprocessed ones.",
"The standard CHiME-5 recipes were used to: (i) train GMM-HMM alignment models, (ii) clean up the training data, and (iii) augment the training data using three-fold speed perturbation. The acoustic feature vector consisted of 40-dimensional MFCCs appended with 100-dimensional i-vectors. By default, the acoustic models were trained using the LF-MMI criterion and a 3-gram language model was used for decoding BIBREF10. Discriminative training (DT) BIBREF22 and an additional RNN-based language model (RNN-LM) BIBREF23 were applied to improve recognition accuracy for the best performing systems."
],
[
"The initial baseline system BIBREF10 of the CHiME-5 challenge uses a TDNN AM. However, recently it has been shown that introducing factorized layers into the TDNN architecture facilitates training deeper networks and also improves the ASR performance BIBREF24. This architecture has been employed in the new baseline system for the challenge BIBREF9. The TDNN-F has 15 layers with a hidden dimension of 1536 and a bottleneck dimension of 160; each layer also has a resnet-style bypass-connection from the output of the previous layer, and a “continuous dropout” schedule BIBREF9. In addition to the TDNN-F, the newly released baseline also uses simulated reverberated speech from worn microphone recordings for augmenting the training set, it employes front-end speech dereverberation and beamforming (WPE+BFIt), as well as robust i-vector extraction using 2-stage decoding.",
"CNN have been previously shown to improve ASR robustness BIBREF25. Therefore, combining CNN and TDNN-F layers is a promising approach to improve the baseline system of BIBREF9. To test this hypothesis, a CNN-TDNNF AM architecture consisting of 6 CNN layers followed by 9 TDNN-F layers was compared against an AM having 15 TDNN-F layers. All TDNN-F layers have the topology described above.",
"ASR results are given in Table TABREF10. The first two rows show that replacing the TDNN-F with the CNN-TDNNF AM yielded more than 2% absolute WER reduction. We also trained another CNN-TDNNF model using only a small subset (worn + 100k utterances from arrays) of training data (about 316hrs in total) which has produced slightly better WERs compared with the baseline TDNN-F trained on a much larger dataset (roughly 1416hrs in total). For consistency, 2-stage decoding was used for all results in Table TABREF10. We conclude that the CNN-TDNNF model outperforms the TDNNF model for the CHiME-5 scenario and, therefore, for the remainder of the paper we only report results using the CNN-TDNNF AM."
],
[
"An extensive set of experiments was performed to measure the WER impact of enhancement on the CHiME-5 training and test data. We test enhancement methods of varying strengths, as described in Section SECREF5, and the results are depicted in Table TABREF12. In all cases, the (unprocessed) worn dataset was also included for AM training since it was found to improve performance (supporting therefore the argument that data variability helps ASR robustness).",
"In Table TABREF12, in each row the recognition accuracy improves monotonically from left to right, i.e., as the enhancement strategy on the test data becomes stronger. Reading the table in each column from top to bottom, one observes that accuracy improves with increasing power of the enhancement on the training data, however, only as long as the enhancement on the training data is not stronger than on the test data. Compared with unprocessed training and test data (None-None), GSS6-GSS6 yields roughly 35% (24%) relative WER reduction on the DEV (EVAL) set, and 12% (11%) relative WER reduction when compared with the None-GSS6 scenario. Comparing the amount of training data used to train the acoustic models, we observe that it decreases drastically from no enhancement to the GSS6 enhancement."
],
[
"To facilitate comparison with the recently published top-line in BIBREF12 (H/UPB), we have conducted a more focused set of experiments whose results are depicted in Table TABREF14. As explained in Section SECREF16, we opted for BIBREF12 instead of BIBREF13 as baseline because the former system is stronger. The experiments include refining the GSS enhancement using time annotations from ASR output (GSS w/ ASR), performing discriminative training on top of the AMs trained with LF-MMI and performing RNN LM rescoring. All the above helped further improve ASR performance. We report performance of our system on both single and multiple array tracks. To have a fair comparison, the results are compared with the single-system performance reported inBIBREF12.",
"For the single array track, the proposed system without RNN LM rescoring achieves 16% (11%) relative WER reduction on the DEV (EVAL) set when compared with System8 in BIBREF12 (row one in Table TABREF14). RNN LM rescoring further helps improve the proposed system performance.",
"For the multi array track, the proposed system without RNN LM rescoring achieved 6% (7%) relative WER reduction on the DEV (EVAL) set when compared with System16 in BIBREF12 (row six in Table TABREF14).",
"We also performed a test using GSS with the oracle alignments (GSS w/ oracle) to assess the potential of time annotation refinement (gray shade lines in Table TABREF14). It can be seen that there is some, however not much room for improvement.",
"Finally, cleaning up the training set not only boosted the recognition performance, but managed to do so using a fraction of the training data in BIBREF12, as shown in Table TABREF15. This translates to significantly faster and cheaper training of acoustic models, which is a major advantage in practice."
],
[
"Our experiments have shown that the temporal context of some GSS components has a significant effect on the WER. Two cases are investigated: (i) partially dropping the temporal context for the EM stage, and (ii) dropping the temporal context for beamforming. The evaluation was conducted with an acoustic model trained on unprocessed speech and the enhancement was applied during test only. Results are depicted in Table TABREF17.",
"The first row corresponds to the GSS configuration in BIBREF13 while the second one corresponds to the GSS configuration in BIBREF12. First two rows show that dropping the temporal context for estimating statistics for beamforming improves ASR accuracy. For the last row, the EM algorithm was run 20 iterations with temporal context, followed by another 10 without context. Since the performance decreased, we concluded that the best configuration for the GSS enhancement in CHiME-5 scenario is using full temporal context for the EM stage and dropping it for the beamforming stage. Consequently, we have chosen system BIBREF12 as baseline in this study since is using the stronger GSS configuration."
],
[
"The results presented so far were overall accuracies on the test set of CHiME-5. However, since speaker overlap is a major issue for these data, it is of interest to investigate the methods' performance as a function of the amount of overlapped speech. Employing the original CHiME-5 annotations, the word distribution of overlapped speech was computed for DEV and EVAL sets (silence portions were not filtered out). The five-bin normalized histogram of the data is plotted in Fig. FIGREF19. Interestingly, the percentage of segments with low overlapped speech is significantly higher for the EVAL than for the DEV set, and, conversely, the number of words with high overlapped speech is considerably lower for the EVAL than for the DEV set. This distribution may explain the difference in performance observed between the DEV and EVAL sets.",
"Based on the distributions in Fig. FIGREF19, the test data was split. Two cases were considered: (a) same enhancement for training and test data (matched case, Table TABREF20), and (b) unprocessed training data and enhanced test data (mismatched case, Table TABREF21). As expected, the WER increases monotonically as the amount of overlap increases in both scenarios, and the recognition accuracy improves as the enhancement method becomes stronger.",
"Graphical representations of WER gains (relative to the unprocessed case) in Tables TABREF20 and TABREF21 are given in Figs. FIGREF22 and FIGREF25. The plots show that as the amount of speaker overlap increases, the accuracy gain (relative to the unprocessed case) of the weaker signal enhancement (BFIt) drops. This is an expected result since BFIt is not a source separation algorithm. Conversely, as the amount of speaker overlap increases, the accuracy gain (relative to None) of the stronger GSS enhancement improves quite significantly. A rather small decrease in accuracy is observed in the mismatched case (Fig. FIGREF25) for GSS1 in the lower overlap regions. As already mentioned in Section SECREF3, this is due to the masking stage. It has previously been observed that using masking for speech enhancement without a cross talker decreases ASR recognition performance. We have also included in Fig. FIGREF25 the GSS1 version without masking (GSS w/o Mask), which indeed yields significant accuracy gains on segments with little overlap. However, since the overall accuracy of GSS1 with masking is higher than the overall gain of GSS1 without masking, GSS w/o mask was not included in the previous experiments."
],
[
"In this paper we performed an extensive experimental evaluation on the acoustically very challenging CHiME-5 dinner party data showing that: (i) cleaning up training data can lead to substantial word error rate reduction, and (ii) enhancement in training is advisable as long as enhancement in test is at least as strong as in training. This approach stands in contrast and delivers larger accuracy gains at a fraction of training data than the common data simulation strategy found in the literature. Using a CNN-TDNNF acoustic model topology along with GSS enhancement refined with time annotations from ASR, discriminative training and RNN LM rescoring, we achieved a new single-system state-of-the-art result on CHiME-5, which is 41.6% (43.2%) on the development (evaluation) set, which is a 8% relative improvement of the word error rate over a comparable system reported so far."
],
[
"Parts of computational resources required in this study were provided by the Paderborn Center for Parallel Computing."
]
],
"section_name": [
"Introduction",
"CHiME-5 corpus description",
"Guided source separation",
"Experiments ::: General configuration",
"Experiments ::: Acoustic model",
"Experiments ::: Enhancement effectiveness for ASR training and test",
"Experiments ::: State-of-the-art single-system for CHiME-5",
"Discussion ::: Temporal context configuration for GSS",
"Discussion ::: Analysis of speaker overlap effect on WER accuracy",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"1287a2036437001aceb17e4729aa9b6f22a92aef",
"c729aee014a2d888da8909141f74ecaa7bed0ac7"
],
"answer": [
{
"evidence": [
"Experiments were performed using the CHiME-5 data. Distant microphone recordings (U data) during training and/or testing were processed using the speech enhancement methods depicted in Table TABREF6. Speech was either left unprocessed, enhanced using a weighted delay-and-sum beamformer (BFIt) BIBREF21 with or without dereverberation (WPE), or processed using the guided source separation (GSS) approach described in Section SECREF3. In Table TABREF6, the strength of the enhancement increases from top to bottom, i.e., GSS6 signals are much cleaner than the unprocessed ones.",
"An extensive set of experiments was performed to measure the WER impact of enhancement on the CHiME-5 training and test data. We test enhancement methods of varying strengths, as described in Section SECREF5, and the results are depicted in Table TABREF12. In all cases, the (unprocessed) worn dataset was also included for AM training since it was found to improve performance (supporting therefore the argument that data variability helps ASR robustness).",
"In Table TABREF12, in each row the recognition accuracy improves monotonically from left to right, i.e., as the enhancement strategy on the test data becomes stronger. Reading the table in each column from top to bottom, one observes that accuracy improves with increasing power of the enhancement on the training data, however, only as long as the enhancement on the training data is not stronger than on the test data. Compared with unprocessed training and test data (None-None), GSS6-GSS6 yields roughly 35% (24%) relative WER reduction on the DEV (EVAL) set, and 12% (11%) relative WER reduction when compared with the None-GSS6 scenario. Comparing the amount of training data used to train the acoustic models, we observe that it decreases drastically from no enhancement to the GSS6 enhancement.",
"In this paper we performed an extensive experimental evaluation on the acoustically very challenging CHiME-5 dinner party data showing that: (i) cleaning up training data can lead to substantial word error rate reduction, and (ii) enhancement in training is advisable as long as enhancement in test is at least as strong as in training. This approach stands in contrast and delivers larger accuracy gains at a fraction of training data than the common data simulation strategy found in the literature. Using a CNN-TDNNF acoustic model topology along with GSS enhancement refined with time annotations from ASR, discriminative training and RNN LM rescoring, we achieved a new single-system state-of-the-art result on CHiME-5, which is 41.6% (43.2%) on the development (evaluation) set, which is a 8% relative improvement of the word error rate over a comparable system reported so far."
],
"extractive_spans": [
"we performed an extensive experimental evaluation on the acoustically very challenging CHiME-5 dinner party data"
],
"free_form_answer": "",
"highlighted_evidence": [
"Experiments were performed using the CHiME-5 data. Distant microphone recordings (U data) during training and/or testing were processed using the speech enhancement methods depicted in Table TABREF6. Speech was either left unprocessed, enhanced using a weighted delay-and-sum beamformer (BFIt) BIBREF21 with or without dereverberation (WPE), or processed using the guided source separation (GSS) approach described in Section SECREF3. In Table TABREF6, the strength of the enhancement increases from top to bottom, i.e., GSS6 signals are much cleaner than the unprocessed ones.",
"An extensive set of experiments was performed to measure the WER impact of enhancement on the CHiME-5 training and test data. We test enhancement methods of varying strengths, as described in Section SECREF5, and the results are depicted in Table TABREF12. In all cases, the (unprocessed) worn dataset was also included for AM training since it was found to improve performance (supporting therefore the argument that data variability helps ASR robustness).",
"In Table TABREF12, in each row the recognition accuracy improves monotonically from left to right, i.e., as the enhancement strategy on the test data becomes stronger. Reading the table in each column from top to bottom, one observes that accuracy improves with increasing power of the enhancement on the training data, however, only as long as the enhancement on the training data is not stronger than on the test data. Compared with unprocessed training and test data (None-None), GSS6-GSS6 yields roughly 35% (24%) relative WER reduction on the DEV (EVAL) set, and 12% (11%) relative WER reduction when compared with the None-GSS6 scenario. Comparing the amount of training data used to train the acoustic models, we observe that it decreases drastically from no enhancement to the GSS6 enhancement.",
"In this paper we performed an extensive experimental evaluation on the acoustically very challenging CHiME-5 dinner party data showing that: (i) cleaning up training data can lead to substantial word error rate reduction, and (ii) enhancement in training is advisable as long as enhancement in test is at least as strong as in training."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In Table TABREF12, in each row the recognition accuracy improves monotonically from left to right, i.e., as the enhancement strategy on the test data becomes stronger. Reading the table in each column from top to bottom, one observes that accuracy improves with increasing power of the enhancement on the training data, however, only as long as the enhancement on the training data is not stronger than on the test data. Compared with unprocessed training and test data (None-None), GSS6-GSS6 yields roughly 35% (24%) relative WER reduction on the DEV (EVAL) set, and 12% (11%) relative WER reduction when compared with the None-GSS6 scenario. Comparing the amount of training data used to train the acoustic models, we observe that it decreases drastically from no enhancement to the GSS6 enhancement."
],
"extractive_spans": [
"accuracy improves with increasing power of the enhancement on the training data, however, only as long as the enhancement on the training data is not stronger than on the test data"
],
"free_form_answer": "",
"highlighted_evidence": [
"In Table TABREF12, in each row the recognition accuracy improves monotonically from left to right, i.e., as the enhancement strategy on the test data becomes stronger. Reading the table in each column from top to bottom, one observes that accuracy improves with increasing power of the enhancement on the training data, however, only as long as the enhancement on the training data is not stronger than on the test data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1b4e4b37dfabd0610a297edf80e2fde40861a7e1",
"2e501260f035876738ad359098903eaeff6d7918"
],
"answer": [
{
"evidence": [
"To facilitate comparison with the recently published top-line in BIBREF12 (H/UPB), we have conducted a more focused set of experiments whose results are depicted in Table TABREF14. As explained in Section SECREF16, we opted for BIBREF12 instead of BIBREF13 as baseline because the former system is stronger. The experiments include refining the GSS enhancement using time annotations from ASR output (GSS w/ ASR), performing discriminative training on top of the AMs trained with LF-MMI and performing RNN LM rescoring. All the above helped further improve ASR performance. We report performance of our system on both single and multiple array tracks. To have a fair comparison, the results are compared with the single-system performance reported inBIBREF12.",
"For the single array track, the proposed system without RNN LM rescoring achieves 16% (11%) relative WER reduction on the DEV (EVAL) set when compared with System8 in BIBREF12 (row one in Table TABREF14). RNN LM rescoring further helps improve the proposed system performance.",
"For the multi array track, the proposed system without RNN LM rescoring achieved 6% (7%) relative WER reduction on the DEV (EVAL) set when compared with System16 in BIBREF12 (row six in Table TABREF14).",
"We also performed a test using GSS with the oracle alignments (GSS w/ oracle) to assess the potential of time annotation refinement (gray shade lines in Table TABREF14). It can be seen that there is some, however not much room for improvement.",
"Finally, cleaning up the training set not only boosted the recognition performance, but managed to do so using a fraction of the training data in BIBREF12, as shown in Table TABREF15. This translates to significantly faster and cheaper training of acoustic models, which is a major advantage in practice.",
"FLOAT SELECTED: Table 4: Comparison of reference [13] and proposed (single) systems in terms of WER for the DEV (EVAL) set. Test data enhancement was refined using ASR alignments or oracle alignments.",
"FLOAT SELECTED: Table 5: Comparison of the reference [13] and proposed systems in terms of amount of training data."
],
"extractive_spans": [],
"free_form_answer": "in terms of WER for the DEV (EVAL) set, the single proposed model (GSS1) has higher WER than the multiple proposed model (GSS6) by 7.4% (4.1%). ",
"highlighted_evidence": [
"To facilitate comparison with the recently published top-line in BIBREF12 (H/UPB), we have conducted a more focused set of experiments whose results are depicted in Table TABREF14. As explained in Section SECREF16, we opted for BIBREF12 instead of BIBREF13 as baseline because the former system is stronger. The experiments include refining the GSS enhancement using time annotations from ASR output (GSS w/ ASR), performing discriminative training on top of the AMs trained with LF-MMI and performing RNN LM rescoring. All the above helped further improve ASR performance. We report performance of our system on both single and multiple array tracks. To have a fair comparison, the results are compared with the single-system performance reported inBIBREF12.",
"For the single array track, the proposed system without RNN LM rescoring achieves 16% (11%) relative WER reduction on the DEV (EVAL) set when compared with System8 in BIBREF12 (row one in Table TABREF14). RNN LM rescoring further helps improve the proposed system performance.",
"For the multi array track, the proposed system without RNN LM rescoring achieved 6% (7%) relative WER reduction on the DEV (EVAL) set when compared with System16 in BIBREF12 (row six in Table TABREF14).",
"We also performed a test using GSS with the oracle alignments (GSS w/ oracle) to assess the potential of time annotation refinement (gray shade lines in Table TABREF14). It can be seen that there is some, however not much room for improvement.",
"Finally, cleaning up the training set not only boosted the recognition performance, but managed to do so using a fraction of the training data in BIBREF12, as shown in Table TABREF15. This translates to significantly faster and cheaper training of acoustic models, which is a major advantage in practice.",
"FLOAT SELECTED: Table 4: Comparison of reference [13] and proposed (single) systems in terms of WER for the DEV (EVAL) set. Test data enhancement was refined using ASR alignments or oracle alignments.",
"FLOAT SELECTED: Table 5: Comparison of the reference [13] and proposed systems in terms of amount of training data."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To facilitate comparison with the recently published top-line in BIBREF12 (H/UPB), we have conducted a more focused set of experiments whose results are depicted in Table TABREF14. As explained in Section SECREF16, we opted for BIBREF12 instead of BIBREF13 as baseline because the former system is stronger. The experiments include refining the GSS enhancement using time annotations from ASR output (GSS w/ ASR), performing discriminative training on top of the AMs trained with LF-MMI and performing RNN LM rescoring. All the above helped further improve ASR performance. We report performance of our system on both single and multiple array tracks. To have a fair comparison, the results are compared with the single-system performance reported inBIBREF12.",
"FLOAT SELECTED: Table 4: Comparison of reference [13] and proposed (single) systems in terms of WER for the DEV (EVAL) set. Test data enhancement was refined using ASR alignments or oracle alignments."
],
"extractive_spans": [],
"free_form_answer": "WER of the best single system 48.6 (46.7) comapred to 41.6 (43.2) of the best multiple system.",
"highlighted_evidence": [
"To facilitate comparison with the recently published top-line in BIBREF12 (H/UPB), we have conducted a more focused set of experiments whose results are depicted in Table TABREF14.",
"FLOAT SELECTED: Table 4: Comparison of reference [13] and proposed (single) systems in terms of WER for the DEV (EVAL) set. Test data enhancement was refined using ASR alignments or oracle alignments."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"8c3a590e2601efa8c1d7e41d57d2af429ddb64ac",
"bb6bb4b4968322beda00a4d14646e734b3f43712"
],
"answer": [
{
"evidence": [
"To facilitate comparison with the recently published top-line in BIBREF12 (H/UPB), we have conducted a more focused set of experiments whose results are depicted in Table TABREF14. As explained in Section SECREF16, we opted for BIBREF12 instead of BIBREF13 as baseline because the former system is stronger. The experiments include refining the GSS enhancement using time annotations from ASR output (GSS w/ ASR), performing discriminative training on top of the AMs trained with LF-MMI and performing RNN LM rescoring. All the above helped further improve ASR performance. We report performance of our system on both single and multiple array tracks. To have a fair comparison, the results are compared with the single-system performance reported inBIBREF12."
],
"extractive_spans": [
"BIBREF12 (H/UPB)"
],
"free_form_answer": "",
"highlighted_evidence": [
"To facilitate comparison with the recently published top-line in BIBREF12 (H/UPB), we have conducted a more focused set of experiments whose results are depicted in Table TABREF14. As explained in Section SECREF16, we opted for BIBREF12 instead of BIBREF13 as baseline because the former system is stronger. The experiments include refining the GSS enhancement using time annotations from ASR output (GSS w/ ASR), performing discriminative training on top of the AMs trained with LF-MMI and performing RNN LM rescoring."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 5: Comparison of the reference [13] and proposed systems in terms of amount of training data.",
"Finally, cleaning up the training set not only boosted the recognition performance, but managed to do so using a fraction of the training data in BIBREF12, as shown in Table TABREF15. This translates to significantly faster and cheaper training of acoustic models, which is a major advantage in practice."
],
"extractive_spans": [],
"free_form_answer": "Previous single system state of the art had WER of 58.3 (53.1).",
"highlighted_evidence": [
"FLOAT SELECTED: Table 5: Comparison of the reference [13] and proposed systems in terms of amount of training data.",
"Finally, cleaning up the training set not only boosted the recognition performance, but managed to do so using a fraction of the training data in BIBREF12, as shown in Table TABREF15."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"26f449ef8237e37ffedfdb437f0fe10aa56324ba",
"980825c66b7eec3dff09d0369bee935235e17218"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 5: Comparison of the reference [13] and proposed systems in terms of amount of training data.",
"Finally, cleaning up the training set not only boosted the recognition performance, but managed to do so using a fraction of the training data in BIBREF12, as shown in Table TABREF15. This translates to significantly faster and cheaper training of acoustic models, which is a major advantage in practice."
],
"extractive_spans": [],
"free_form_answer": "In case of singe model the WER was better by 10.% (6.4%) and in case of multi model it was 3.5% ( 4.1%)",
"highlighted_evidence": [
"FLOAT SELECTED: Table 5: Comparison of the reference [13] and proposed systems in terms of amount of training data.",
"Finally, cleaning up the training set not only boosted the recognition performance, but managed to do so using a fraction of the training data in BIBREF12, as shown in Table TABREF15. This translates to significantly faster and cheaper training of acoustic models, which is a major advantage in practice."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What supports the claim that enhancement in training is advisable as long as enhancement in test is at least as strong as in training?",
"How does this single-system compares to system combination ones?",
"What was previous single-system state of the art result on the CHiME-5 data?",
"How much is error rate reduced by cleaning up training data?"
],
"question_id": [
"14fdc8087f2a62baea9d50c4aa3a3f8310b38d17",
"3d2b5359259cd3518f361d760bacc49d84c40d82",
"26a321e242e58ea5f2ceaf37f26566dd0d0a0da1",
"6920fd470e6a99c859971828e20276a1b9912280"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1: Overview of speech enhancement system with Weighted Prediction Error (WPE) dereverberation, Mixture Model (MM) estimation, Source Extractor (SE) and Automatic Speech Recognition (ASR).",
"Fig. 2: Visualization of time annotations on a fragment of the CHiME-5 data. The grey bars indicate source activity, the inner vertical blue lines denote the utterance boundaries of a segment of speaker P01, and the outer vertical red lines the boundaries of the extended utterance, consisting of the segment and the “context”, on which the mixture model estimation algorithm operates.",
"Table 1: Naming of the speech enhancement methods.",
"Table 2: Comparison of baseline TDNN-F [10] and proposed CNNTDNNF AMs in terms of WER for the DEV (EVAL) set.",
"Table 3: WER results on the DEV (EVAL) set and various combinations of speech enhancement for ASR training and test (CNNTDNNF AM). Amount of training data (hrs) is also specified.",
"Table 4: Comparison of reference [13] and proposed (single) systems in terms of WER for the DEV (EVAL) set. Test data enhancement was refined using ASR alignments or oracle alignments.",
"Table 5: Comparison of the reference [13] and proposed systems in terms of amount of training data.",
"Table 6: WER results using CNN-TDNNF AM trained on unprocessed (None) when some GSS enhancement (test) components ignore the temporal context.",
"Table 7: Breakdown of absolute WER results on the DEV (EVAL) set for the same training and test enhancement (matched case, CNNTDNNF AM).",
"Table 8: Breakdown of absolute WER results on the DEV (EVAL) set for unprocessed training data and various test enhancements (mismatched case, CNN-TDNNF AM).",
"Fig. 3: Word distribution of overlapped speech for the DEV and EVAL sets of CHiME-5.",
"Fig. 4: Relative WER gain for the matched case vs unprocessed, Table 7 row one (CNN-TDNNF AM).",
"Fig. 5: Relative WER gain for the mismatched case vs unprocessed, Table 8 row one (CNN-TDNNF AM trained on unprocessed)."
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"4-Table5-1.png",
"5-Table6-1.png",
"5-Table7-1.png",
"5-Table8-1.png",
"5-Figure3-1.png",
"6-Figure4-1.png",
"6-Figure5-1.png"
]
} | [
"How does this single-system compares to system combination ones?",
"What was previous single-system state of the art result on the CHiME-5 data?",
"How much is error rate reduced by cleaning up training data?"
] | [
[
"1909.12208-4-Table4-1.png",
"1909.12208-Experiments ::: State-of-the-art single-system for CHiME-5-2",
"1909.12208-Experiments ::: State-of-the-art single-system for CHiME-5-0",
"1909.12208-Experiments ::: State-of-the-art single-system for CHiME-5-3",
"1909.12208-Experiments ::: State-of-the-art single-system for CHiME-5-4",
"1909.12208-4-Table5-1.png",
"1909.12208-Experiments ::: State-of-the-art single-system for CHiME-5-1"
],
[
"1909.12208-Experiments ::: State-of-the-art single-system for CHiME-5-0",
"1909.12208-4-Table5-1.png",
"1909.12208-Experiments ::: State-of-the-art single-system for CHiME-5-4"
],
[
"1909.12208-4-Table5-1.png",
"1909.12208-Experiments ::: State-of-the-art single-system for CHiME-5-4"
]
] | [
"WER of the best single system 48.6 (46.7) comapred to 41.6 (43.2) of the best multiple system.",
"Previous single system state of the art had WER of 58.3 (53.1).",
"In case of singe model the WER was better by 10.% (6.4%) and in case of multi model it was 3.5% ( 4.1%)"
] | 323 |
1805.09821 | A Corpus for Multilingual Document Classification in Eight Languages | Cross-lingual document classification aims at training a document classifier on resources in one language and transferring it to a different language without any additional resources. Several approaches have been proposed in the literature and the current best practice is to evaluate them on a subset of the Reuters Corpus Volume 2. However, this subset covers only few languages (English, German, French and Spanish) and almost all published works focus on the the transfer between English and German. In addition, we have observed that the class prior distributions differ significantly between the languages. We argue that this complicates the evaluation of the multilinguality. In this paper, we propose a new subset of the Reuters corpus with balanced class priors for eight languages. By adding Italian, Russian, Japanese and Chinese, we cover languages which are very different with respect to syntax, morphology, etc. We provide strong baselines for all language transfer directions using multilingual word and sentence embeddings respectively. Our goal is to offer a freely available framework to evaluate cross-lingual document classification, and we hope to foster by these means, research in this important area. | {
"paragraphs": [
[
"There are many tasks in natural language processing which require the classification of sentences or longer paragraphs into a set of predefined categories. Typical applications are for instance topic identification (e.g. sports, news, $\\ldots $ ) or product reviews (positive or negative). There is a large body of research on approaches for document classification. An important aspect to compare these different approaches is the availability of high quality corpora to train and evaluate them. Unfortunately, most of these evaluation tasks focus on the English language only, while there is an ever increasing need to perform document classification in many other languages. One could of course collect and label training data for other languages, but this would be costly and time consuming. An interesting alternative is “cross-lingual document classification”. The underlying idea is to use a representation of the words or whole documents which is independent of the language. By these means, a classifier trained on one language can be transferred to a different one, without the need of resources in that transfer language. Ideally, the performance obtained by cross-lingual transfer should be as close as possible to training the entire system on language specific resources. Such a task was first proposed by BIBREF0 using the Reuters Corpus Volume 2. The aim was to first train a classifier on English and then to transfer it to German, and vice versa. An extension to the transfer between English and French and Spanish respectively was proposed by BIBREF1 . However, only few comparative results are available for these transfer directions.",
"The contributions of this work are as follows. We extend previous works and use the data in the Reuters Corpus Volume 2 to define new cross-lingual document classification tasks for eight very different languages, namely English, French, Spanish, Italian, German, Russian, Chinese and Japanese. For each language, we define a train, development and test corpus. We also provide strong reference results for all transfer directions between the eight languages, e.g. not limited to the transfer between a foreign language and English. We compare two approaches, based either on multilingual word or sentence embeddings respectively. By these means, we hope to define a clear evaluation environment for highly multilingual document classification."
],
[
"The Reuters Corpus Volume 2 BIBREF2 , in short RCV2, is a multilingual corpus with a collection of 487,000 news stories. Each news story was manually classified into four hierarchical groups: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets). Topic codes were assigned to capture the major subject of the news story. The entire corpus covers thirteen languages, i.e. Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish, written by local reporters in each language. The news stories are not parallel. Single-label stories, i.e. those labeled with only one topic out of the four top categories, are often used for evaluations. However, the class distributions vary significantly across all the thirteen languages (see Table 1 ). Therefore, using random samples to extract evaluation corpora may lead to very imbalanced test sets, i.e. undesired and misleading variability among the languages when the main focus is to evaluate cross-lingual transfer."
],
[
"A subset of the English and German sections of RCV2 was defined by BIBREF0 to evaluate cross-lingual document classification. This subset was used in several follow-up works and many comparative results are available for the transfer between German and English. BIBREF1 extended the use of RCV2 for cross-lingual document classification to the French and Spanish language (transfer from and to English). An analysis of these evaluation corpora has shown that the class prior distributions vary significantly between the classes (see Table 2 ). For German and English, more than 80% of the examples in the test set belong to the classes GCAT and MCAT and at most 2% to the class CCAT. These class prior distributions are very different for French and Spanish: the class CCAT is quite frequent with 21% and 15% of the French and Spanish test set respectively. One may of course argue that variability in the class prior distribution is typical for real-world problems, but this shifts the focus from a high quality cross-lingual transfer to “tricks” for how to best handle the class imbalance. Indeed, in previous research the transfer between English and German achieves accuracies higher than 90%, while the performance is below 80% for EN/FR or even 70% EN/ES. We have seen experimental evidence that these important differences are likely to be caused by the discrepancy in the class priors of the test sets."
],
[
"In this work, we propose a new evaluation framework for highly multilingual document classification which significantly extends the current state. We continue to use Reuters Corpus Volume 2, but based on the above mentioned limitations of the current subset of RCV2, we propose new tasks for cross-lingual document classification. The design choices are as follow:",
"Uniform class coverage: we sample from RCV2 the same number of examples for each class and language;",
"Split the data into train, development and test corpus: for each languages, we provide training data of different sizes (1k, 2k, 5k and 10k stories), a development (1k) and a test corpus (4k);",
"Support more languages: German (DE), English (EN), Spanish (ES), French (FR), Italian (IT), Japanese (JA), Russian (RU) and Chinese (ZH). Reference baseline results are available for all languages.",
"Most works in the literature use only 1 000 examples to train the document classifier. To invest the impact of more training data, we also provide training corpora of 2 000, 5 000 and 10 000 documents. The development corpus for each language is composed of 1 000 and the test set of 4 000 documents respectively. All have uniform class distributions. An important aspect of this work is to provide a framework to study and evaluate cross-lingual document classification for many language pairs. In that spirit, we will name this corpus “Multilingual Document Classification Corpus”, abbreviated as MLDoc. The full Reuters Corpus Volume 2 has a special license and we can not distribute it ourselves. Instead, we provide tools to extract all the subsets of MLDoc at https://github.com/facebookresearch/MLDoc."
],
[
"In this section, we provide comparative results on our new Multilingual Document Classification Corpus. Since the initial work by BIBREF0 many alternative approaches to cross-lingual document classification have been developed. We will encourage the respective authors to evaluate their systems on MLDoc. We believe that a large variety of transfer language pairs will give valuable insights on the performance of the various approaches.",
"In this paper, we propose initial strong baselines which represent two complementary directions of research: one based on the aggregation of multilingual word embeddings, and another one, which directly learns multilingual sentence representations. Details on each approach are given in section \"Multilingual word representations\" and \"Multilingual sentence representations\" respectively. In contrast to previous works on cross-lingual document classification with RVC2, we explore training the classifier on all languages and transfer it to all others, ie. we do not limit our study to the transfer between English and a foreign language.",
"One can envision several ways to define cross-lingual document classification, in function of the resources which are used in the source and transfer language (see Table 3 ). The first scheme assumes that we have no resources in the transfer language at all, neither labeled nor unlabeled. We will name this case “zero-shot cross-lingual document classification”. To simplify the presentation, we will assume that we transfer from English to German. The training and evaluation protocol is as follows. First, train a classifier using resources in the source language only, eg. the training and development corpus are in English. All meta parameters and model choices are performed using the English development corpus. Once the best performing model is selected, it is applied to the transfer language, eg. the German test set. Since no resources of the transfer language are used, the same system can be applied to many different transfer languages. This type of cross-lingual document classification needs a very strong multilingual representation since no knowledge on the target language was used during the development of the classifier.",
"In a second class of cross-lingual document classification, we may aim in improving the transfer performance by using a limited amount of resources in the target language. In the framework of the proposed MLDoc we will use the development corpus of target language for model selection. We will name this method “targeted cross-lingual document classification” since the system is tailored to one particular transfer language. It is unlikely that this system will perform well on other languages than the ones used for training or model selection.",
"If the goal is to build one document classification system for many languages, it may be interesting to use already several languages during training and model selection. To allow a fair comparison, we will assume that these multilingual resources have the same size than the ones used for zero-shot or targeted cross-language document classification, e.g. a training set composed of five languages with 200 examples each. This type of training is not a cross-lingual approach any more. Consequently, we will refer to this method as “joint multilingual document classification”."
],
[
"Several works have been proposed to learn multilingual word embeddings, which are then combined to perform cross-lingual document classifications. These word embeddings are trained on either word alignments or sentence-aligned parallel corpora. To provide reproducible benchmark results, we use MultiCCA word embeddings published by BIBREF3 .",
"There are multiple ways to combine these word embeddings for classification. We train a simple one-layer convolutional neural network (CNN) on top of the word embeddings, which has shown to perform well on text classification tasks regardless of training data size BIBREF4 . Specifically, convolutional filters are applied to windows of word embeddings, with a max-over-time pooling on top of them. We freeze the multilingual word embeddings while only training the classifier. Hyper-parameters such as convolutional output dimension, window sizes are done by grid search over the Dev set of the same language as the train set."
],
[
"A second direction of research is to directly learn multilingual sentence representations. In this paper, we evaluate a recently proposed technique to learn joint multilingual sentence representations BIBREF5 . The underlying idea is to use multiple sequence encoders and decoders and to train them with aligned corpora from the machine translation community. The goal is that all encoders share the same sentence representation, i.e. we map all languages into one common space. A detailed description of this approach can be found in BIBREF5 . We have developed two versions of the system: one trained on the Europarl corpus BIBREF6 to cover the languages English, German, French, Spanish and Italian, and another one trained on the United Nations corpus BIBREF7 which allows to learn a joint sentence embedding for English, French, Spanish, Russian and Chinese. We use a one hidden-layer MLP as classifier. For comparison, we have evaluated its performance on the original subset of RCV2 as used in previous publications on cross-lingual document classification: we are able to outperform the current state-of-the-art in three out of six transfer directions."
],
[
"The classification accuracy for zero-shot transfer on the test set of our Multilingual Document Classification Corpus are summarized in Table 4 . The classifiers based on the MultiCCA embeddings perform very well on the development corpus (accuracies close or exceeding 90%). The system trained on English also achieves excellent results when transfered to a different languages, it scores best for three out of seven languages (DE, IT and ZH). However, the transfer accuracies are quite low when training the classifiers on other languages than English, in particular for Russian, Chinese and Japanese.",
"The systems using multilingual sentence embeddings seem to be overall more robust and less language specific. They score best for four out of seven languages (EN, ES, FR and RU). Training on German or French actually leads to better transfer performance than training on English. Cross-lingual transfer between very different languages like Chinese and Russian also achieves remarkable results."
],
[
"The classification accuracy for targeted transfer are summarized in Table 5 . Due to space constraints, we provide only the results for multilingual sentence embeddings and five target languages. Not surprisingly, targeting the classifier to the transfer language can lead to important improvements, in particular when training on Italian."
],
[
"The classification accuracies for joint multilingual training are given in Table 6 . We use a multilingual train and Dev corpus composed of 200 examples of each of the five languages. One could argue that the data collection and annotation cost for such a corpus would be the same than producing a corpus of the same size in one language only. This leads to important improvement for all languages, in comparison to zero-shot or targeted transfer learning."
],
[
"We have defined a new evaluation framework for cross-lingual document classification in eight languages. This corpus largely extends previous corpora which were also based on the Reuters Corpus Volume 2, but mainly considered the transfer between English and German. We also provide detailed baseline results using two competitive approaches (multilingual word and sentence embeddings, respectively), for cross-lingual document classification between all eight languages. This new evaluation framework is freely available at https://github.com/facebookresearch/MLDoc."
]
],
"section_name": [
"Introduction",
"Corpus description",
"Cross-lingual document classification",
"Multilingual document classification",
"Baseline results",
"Multilingual word representations",
"Multilingual sentence representations",
"Zero-shot cross-lingual document classification",
"Targeted cross-lingual document classification",
"Joint multilingual document classification",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"12e4ae032ac569d1502ff69234090cd3532cdc97",
"d2512760ee49d4d95d698125ee7196012a4f2f88"
],
"answer": [
{
"evidence": [
"In this paper, we propose initial strong baselines which represent two complementary directions of research: one based on the aggregation of multilingual word embeddings, and another one, which directly learns multilingual sentence representations. Details on each approach are given in section \"Multilingual word representations\" and \"Multilingual sentence representations\" respectively. In contrast to previous works on cross-lingual document classification with RVC2, we explore training the classifier on all languages and transfer it to all others, ie. we do not limit our study to the transfer between English and a foreign language."
],
"extractive_spans": [
"aggregation of multilingual word embeddings",
"multilingual sentence representations"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we propose initial strong baselines which represent two complementary directions of research: one based on the aggregation of multilingual word embeddings, and another one, which directly learns multilingual sentence representations."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Several works have been proposed to learn multilingual word embeddings, which are then combined to perform cross-lingual document classifications. These word embeddings are trained on either word alignments or sentence-aligned parallel corpora. To provide reproducible benchmark results, we use MultiCCA word embeddings published by BIBREF3 .",
"A second direction of research is to directly learn multilingual sentence representations. In this paper, we evaluate a recently proposed technique to learn joint multilingual sentence representations BIBREF5 . The underlying idea is to use multiple sequence encoders and decoders and to train them with aligned corpora from the machine translation community. The goal is that all encoders share the same sentence representation, i.e. we map all languages into one common space. A detailed description of this approach can be found in BIBREF5 . We have developed two versions of the system: one trained on the Europarl corpus BIBREF6 to cover the languages English, German, French, Spanish and Italian, and another one trained on the United Nations corpus BIBREF7 which allows to learn a joint sentence embedding for English, French, Spanish, Russian and Chinese. We use a one hidden-layer MLP as classifier. For comparison, we have evaluated its performance on the original subset of RCV2 as used in previous publications on cross-lingual document classification: we are able to outperform the current state-of-the-art in three out of six transfer directions.",
"In this paper, we propose initial strong baselines which represent two complementary directions of research: one based on the aggregation of multilingual word embeddings, and another one, which directly learns multilingual sentence representations. Details on each approach are given in section \"Multilingual word representations\" and \"Multilingual sentence representations\" respectively. In contrast to previous works on cross-lingual document classification with RVC2, we explore training the classifier on all languages and transfer it to all others, ie. we do not limit our study to the transfer between English and a foreign language."
],
"extractive_spans": [
"we use MultiCCA word embeddings published by BIBREF3",
"joint multilingual sentence representations"
],
"free_form_answer": "",
"highlighted_evidence": [
"To provide reproducible benchmark results, we use MultiCCA word embeddings published by BIBREF3 .",
"In this paper, we evaluate a recently proposed technique to learn joint multilingual sentence representations BIBREF5 . ",
"In this paper, we propose initial strong baselines which represent two complementary directions of research: one based on the aggregation of multilingual word embeddings, and another one, which directly learns multilingual sentence representations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"7fa8d8b1eb8a1630feb99a8e11ebfa501ac5bc3c",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"d13ea2a46302945e2e454757ab8436c09cb12cf1",
"d6e9b0c97ceafd9411594ccc56337d3ffd802afc"
],
"answer": [
{
"evidence": [
"Split the data into train, development and test corpus: for each languages, we provide training data of different sizes (1k, 2k, 5k and 10k stories), a development (1k) and a test corpus (4k);",
"Most works in the literature use only 1 000 examples to train the document classifier. To invest the impact of more training data, we also provide training corpora of 2 000, 5 000 and 10 000 documents. The development corpus for each language is composed of 1 000 and the test set of 4 000 documents respectively. All have uniform class distributions. An important aspect of this work is to provide a framework to study and evaluate cross-lingual document classification for many language pairs. In that spirit, we will name this corpus “Multilingual Document Classification Corpus”, abbreviated as MLDoc. The full Reuters Corpus Volume 2 has a special license and we can not distribute it ourselves. Instead, we provide tools to extract all the subsets of MLDoc at https://github.com/facebookresearch/MLDoc.",
"We have defined a new evaluation framework for cross-lingual document classification in eight languages. This corpus largely extends previous corpora which were also based on the Reuters Corpus Volume 2, but mainly considered the transfer between English and German. We also provide detailed baseline results using two competitive approaches (multilingual word and sentence embeddings, respectively), for cross-lingual document classification between all eight languages. This new evaluation framework is freely available at https://github.com/facebookresearch/MLDoc."
],
"extractive_spans": [],
"free_form_answer": "larger",
"highlighted_evidence": [
"Split the data into train, development and test corpus: for each languages, we provide training data of different sizes (1k, 2k, 5k and 10k stories), a development (1k) and a test corpus (4k);",
"Most works in the literature use only 1 000 examples to train the document classifier. To invest the impact of more training data, we also provide training corpora of 2 000, 5 000 and 10 000 documents. ",
"This corpus largely extends previous corpora which were also based on the Reuters Corpus Volume 2, but mainly considered the transfer between English and German. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a",
"35491e1e579f6d147f4793edce4c1a80ab2410e7"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"Which baselines were they used for evaluation?",
"What is the difference in size compare to the previous model?"
],
"question_id": [
"f741d32b92630328df30f674af16fbbefcad3f93",
"fe7f7bcf37ca964b4dc9e9c7ebf35286e1ee042b"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"multilingual classification",
"multilingual classification"
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Class distribution of all single-label stories per language of the entire Reuters Corpus Volume 2.",
"Table 2: Class distribution of the test set of the RCV2 subsets as used in previous publications on cross-lingual document classification.",
"Table 3: Different schemes of cross- and multilingual document classification.",
"Table 4: Baseline classification accuracies for zero-shot transfer on the test set of the proposed Multilingual Document Classification Corpus. All classifiers were trained on 1 000 news stories and model selection is performed on the Dev corpus of the training language. The same system is then applied to all test languages. Underlined scores indicate the best result on each transfer language for each group, bold scores the overall best accuracy, and italic ones the second best results.",
"Table 5: Baseline classification accuracies for targeted transfer on the test set of the proposed MLDoc. All classifiers were trained on 1 000 news stories and model selection is performed on the Dev corpus of the target language. Each entry corresponds to a specifically optimized system.",
"Table 6: Baseline classification accuracies on the test set of the proposed MLDoc for joint multilingual training. Train and test sets are composed of 200 examples form each of the five languages."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"2-Table3-1.png",
"3-Table4-1.png",
"4-Table5-1.png",
"4-Table6-1.png"
]
} | [
"What is the difference in size compare to the previous model?"
] | [
[
"1805.09821-Multilingual document classification-4",
"1805.09821-Conclusion-0",
"1805.09821-Multilingual document classification-2"
]
] | [
"larger"
] | 324 |
1707.07212 | "i have a feeling trump will win..................": Forecasting Winners and Losers from User Predictions on Twitter | Social media users often make explicit predictions about upcoming events. Such statements vary in the degree of certainty the author expresses toward the outcome:"Leonardo DiCaprio will win Best Actor"vs."Leonardo DiCaprio may win"or"No way Leonardo wins!". Can popular beliefs on social media predict who will win? To answer this question, we build a corpus of tweets annotated for veridicality on which we train a log-linear classifier that detects positive veridicality with high precision. We then forecast uncertain outcomes using the wisdom of crowds, by aggregating users' explicit predictions. Our method for forecasting winners is fully automated, relying only on a set of contenders as input. It requires no training data of past outcomes and outperforms sentiment and tweet volume baselines on a broad range of contest prediction tasks. We further demonstrate how our approach can be used to measure the reliability of individual accounts' predictions and retrospectively identify surprise outcomes. | {
"paragraphs": [
[
"In the digital era we live in, millions of people broadcast their thoughts and opinions online. These include predictions about upcoming events of yet unknown outcomes, such as the Oscars or election results. Such statements vary in the extent to which their authors intend to convey the event will happen. For instance, (a) in Table TABREF2 strongly asserts the win of Natalie Portman over Meryl Streep, whereas (b) imbues the claim with uncertainty. In contrast, (c) does not say anything about the likelihood of Natalie Portman winning (although it clearly indicates the author would like her to win).",
"Prior work has made predictions about contests such as NFL games BIBREF0 and elections using tweet volumes BIBREF1 or sentiment analysis BIBREF2 , BIBREF3 . Many such indirect signals have been shown useful for prediction, however their utility varies across domains. In this paper we explore whether the “wisdom of crowds\" BIBREF4 , as measured by users' explicit predictions, can predict outcomes of future events. We show how it is possible to accurately forecast winners, by aggregating many individual predictions that assert an outcome. Our approach requires no historical data about outcomes for training and can directly be adapted to a broad range of contests.",
"To extract users' predictions from text, we present TwiVer, a system that classifies veridicality toward future contests with uncertain outcomes. Given a list of contenders competing in a contest (e.g., Academy Award for Best Actor), we use TwiVer to count how many tweets explicitly assert the win of each contender. We find that aggregating veridicality in this way provides an accurate signal for predicting outcomes of future contests. Furthermore, TwiVer allows us to perform a number of novel qualitative analyses including retrospective detection of surprise outcomes that were not expected according to popular belief (Section SECREF48 ). We also show how TwiVer can be used to measure the number of correct and incorrect predictions made by individual accounts. This provides an intuitive measurement of the reliability of an information source (Section SECREF55 )."
],
[
"In this section we summarize related work on text-driven forecasting and computational models of veridicality.",
"Text-driven forecasting models BIBREF5 predict future response variables using text written in the present: e.g., forecasting films' box-office revenues using critics' reviews BIBREF6 , predicting citation counts of scientific articles BIBREF7 and success of literary works BIBREF8 , forecasting economic indicators using query logs BIBREF9 , improving influenza forecasts using Twitter data BIBREF10 , predicting betrayal in online strategy games BIBREF11 and predicting changes to a knowledge-graph based on events mentioned in text BIBREF12 . These methods typically require historical data for fitting model parameters, and may be sensitive to issues such as concept drift BIBREF13 . In contrast, our approach does not rely on historical data for training; instead we forecast outcomes of future events by directly extracting users' explicit predictions from text.",
"Prior work has also demonstrated that user sentiment online directly correlates with various real-world time series, including polling data BIBREF2 and movie revenues BIBREF14 . In this paper, we empirically demonstrate that veridicality can often be more predictive than sentiment (Section SECREF40 ).",
"Also related is prior work on detecting veridicality BIBREF15 , BIBREF16 and sarcasm BIBREF17 . Soni et al. soni2014modeling investigate how journalists frame quoted content on Twitter using predicates such as think, claim or admit. In contrast, our system TwiVer, focuses on the author's belief toward a claim and direct predictions of future events as opposed to quoted content.",
"Our approach, which aggregates predictions extracted from user-generated text is related to prior work that leverages explicit, positive veridicality, statements to make inferences about users' demographics. For example, Coppersmith et al. coppersmith2014measuring,coppersmith2015adhd exploit users' self-reported statements of diagnosis on Twitter."
],
[
"The first step of our approach is to extract statements that make explicit predictions about unknown outcomes of future events. We focus specifically on contests which we define as events planned to occur on a specific date, where a number of contenders compete and a single winner is chosen. For example, Table TABREF3 shows the contenders for Best Actor in 2016, highlighting the winner.",
"To explore the accuracy of user predictions in social media, we gathered a corpus of tweets that mention events belonging to one of the 10 types listed in Table TABREF17 . Relevant messages were collected by formulating queries to the Twitter search interface that include the name of a contender for a given contest in conjunction with the keyword win. We restricted the time range of the queries to retrieve only messages written before the time of the contest to ensure that outcomes were unknown when the tweets were written. We include 10 days of data before the event for the presidential primaries and the final presidential elections, 7 days for the Oscars, Ballon d'Or and Indian general elections, and the period between the semi-finals and the finals for the sporting events. Table TABREF15 shows several example queries to the Twitter search interface which were used to gather data. We automatically generated queries, using templates, for events scraped from various websites: 483 queries were generated for the presidential primaries based on events scraped from ballotpedia , 176 queries were generated for the Oscars, 18 for Ballon d'Or, 162 for the Eurovision contest, 52 for Tennis Grand Slams, 6 for the Rugby World Cup, 18 for the Cricket World Cup, 12 for the Football World Cup, 76 for the 2016 US presidential elections, and 68 queries for the 2014 Indian general elections. ",
"We added an event prefix (e.g., “Oscars\" or the state for presidential primaries), a keyword (“win\"), and the relevant date range for the event. For example, “Oscars Leonardo DiCaprio win since:2016-2-22 until:2016-2-28\" would be the query generated for the first entry in Table TABREF3 .",
"We restricted the data to English tweets only, as tagged by langid.py BIBREF18 . Jaccard similarity was computed between messages to identify and remove duplicates. We removed URLs and preserved only tweets that mention contenders in the text. This automatic post-processing left us with 57,711 tweets for all winners and 55,558 tweets for losers (contenders who did not win) across all events. Table TABREF17 gives the data distribution across event categories."
],
[
"We obtained veridicality annotations on a sample of the data using Amazon Mechanical Turk. For each tweet, we asked Turkers to judge veridicality toward a candidate winning as expressed in the tweet as well as the author's desire toward the event. For veridicality, we asked Turkers to rate whether the author believes the event will happen on a 1-5 scale (“Definitely Yes\", “Probably Yes\", “Uncertain about the outcome\", “Probably No\", “Definitely No\"). We also added a question about the author's desire toward the event to make clear the difference between veridicality and desire. For example, “I really want Leonardo to win at the Oscars!\" asserts the author's desire toward Leonardo winning, but remains agnostic about the likelihood of this outcome, whereas “Leonardo DiCaprio will win the Oscars\" is predicting with confidence that the event will happen.",
"Figure FIGREF4 shows the annotation interface presented to Turkers. Each HIT contained 10 tweets to be annotated. We gathered annotations for INLINEFORM0 tweets for winners and INLINEFORM1 tweets for losers, giving us a total of INLINEFORM2 tweets. We paid $0.30 per HIT. The total cost for our dataset was $1,000. Each tweet was annotated by 7 Turkers. We used MACE BIBREF19 to resolve differences between annotators and produce a single gold label for each tweet.",
"Figures FIGREF18 and FIGREF18 show heatmaps of the distribution of annotations for the winners for the Oscars in addition to all categories. In both instances, most of the data is annotated with “Definitely Yes\" and “Probably Yes\" labels for veridicality. Figures FIGREF18 and FIGREF18 show that the distribution is more diverse for the losers. Such distributions indicate that the veridicality of crowds' statements could indeed be predictive of outcomes. We provide additional evidence for this hypothesis using automatic veridicality classification on larger datasets in § SECREF4 ."
],
[
"The goal of our system, TwiVer, is to automate the annotation process by predicting how veridical a tweet is toward a candidate winning a contest: is the candidate deemed to be winning, or is the author uncertain? For the purpose of our experiments, we collapsed the five labels for veridicality into three: positive veridicality (“Definitely Yes\" and “Probably Yes\"), neutral (“Uncertain about the outcome\") and negative veridicality (“Definitely No\" and “Probably No\").",
"We model the conditional distribution over a tweet's veridicality toward a candidate INLINEFORM0 winning a contest against a set of opponents, INLINEFORM1 , using a log-linear model: INLINEFORM2 ",
"where INLINEFORM0 is the veridicality (positive, negative or neutral).",
"To extract features INLINEFORM0 , we first preprocessed tweets retrieved for a specific event to identify named entities, using BIBREF20 's Twitter NER system. Candidate ( INLINEFORM1 ) and opponent entities were identified in the tweet as follows:",
"- target ( INLINEFORM0 ). A target is a named entity that matches a contender name from our queries.",
"- opponent ( INLINEFORM0 ). For every event, along with the current target entity, we also keep track of other contenders for the same event. If a named entity in the tweet matches with one of other contenders, it is labeled as opponent.",
"- entity ( INLINEFORM0 ): Any named entity which does not match the list of contenders. Figure FIGREF25 illustrates the named entity labeling for a tweet obtained from the query “Oscars Leonardo DiCaprio win since:2016-2-22 until:2016-2-28\". Leonardo DiCaprio is the target, while the named entity tag for Bryan Cranston, one of the losers for the Oscars, is re-tagged as opponent. These tags provide information about the position of named entities relative to each other, which is used in the features."
],
[
"We use five feature templates: context words, distance between entities, presence of punctuation, dependency paths, and negated keyword.",
"Target and opponent contexts. For every target ( INLINEFORM0 ) and opponent ( INLINEFORM1 ) entities in the tweet, we extract context words in a window of one to four words to the left and right of the target (“Target context\") and opponent (“Opponent context\"), e.g., INLINEFORM2 will win, I'm going with INLINEFORM3 , INLINEFORM4 will win.",
"Keyword context. For target and opponent entities, we also extract words between the entity and our specified keyword ( INLINEFORM0 ) (win in our case): INLINEFORM1 predicted to INLINEFORM2 , INLINEFORM3 might INLINEFORM4 .",
"Pair context. For the election type of events, in which two target entities are present (contender and state. e.g., Clinton, Ohio), we extract words between these two entities: e.g., INLINEFORM0 will win INLINEFORM1 .",
"Distance to keyword. We also compute the distance of target and opponent entities to the keyword.",
"We introduce two binary features for the presence of exclamation marks and question marks in the tweet. We also have features which check whether a tweet ends with an exclamation mark, a question mark or a period. Punctuation, especially question marks, could indicate how certain authors are of their claims.",
"We retrieve dependency paths between the two target entities and between the target and keyword (win) using the TweeboParser BIBREF21 after applying rules to normalize paths in the tree (e.g., “doesn't\" INLINEFORM0 “does not\").",
"We check whether the keyword is negated (e.g., “not win\", “never win\"), using the normalized dependency paths.",
"We randomly divided the annotated tweets into a training set of 2,480 tweets, a development set of 354 tweets and a test set of 709 tweets. MAP parameters were fit using LBFGS-B BIBREF22 . Table TABREF29 provides examples of high-weight features for positive and negative veridicality."
],
[
"We evaluated TwiVer's precision and recall on our held-out test set of 709 tweets. Figure FIGREF26 shows the precision/recall curve for positive veridicality. By setting a threshold on the probability score to be greater than INLINEFORM0 , we achieve a precision of INLINEFORM1 and a recall of INLINEFORM2 in identifying tweets expressing a positive veridicality toward a candidate winning a contest."
],
[
"To assess the robustness of the veridicality classifier when applied to new types of events, we compared its performance when trained on all events vs. holding out one category for testing. Table TABREF37 shows the comparison: the second and third columns give F1 score when training on all events vs. removing tweets related to the category we are testing on. In most cases we see a relatively modest drop in performance after holding out training data from the target event category, with the exception of elections. This suggests our approach can be applied to new event types without requiring in-domain training data for the veridicality classifier."
],
[
"Table TABREF33 shows some examples which TwiVer incorrectly classifies. These errors indicate that even though shallow features and dependency paths do a decent job at predicting veridicality, deeper text understanding is needed for some cases. The opposition between “the heart ...the mind\" in the first example is not trivial to capture. Paying attention to matrix clauses might be important too (as shown in the last tweet “There is no doubt ...\")."
],
[
"We now have access to a classifier that can automatically detect positive veridicality predictions about a candidate winning a contest. This enables us to evaluate the accuracy of the crowd's wisdom by retrospectively comparing popular beliefs (as extracted and aggregated by TwiVer) against known outcomes of contests.",
"We will do this for each award category (Best Actor, Best Actress, Best Film and Best Director) in the Oscars from 2009 – 2016, for every state for both Republican and Democratic parties in the 2016 US primaries, for both the candidates in every state for the final 2016 US presidential elections, for every country in the finals of Eurovision song contest, for every contender for the Ballon d'Or award, for every party in every state for the 2014 Indian general elections, and for the contenders in the finals for all sporting events."
],
[
"A simple voting mechanism is used to predict contest outcomes: we collect tweets about each contender written before the date of the event, and use TwiVer to measure the veridicality of users' predictions toward the events. Then, for each contender, we count the number of tweets that are labeled as positive with a confidence above 0.64, as well as the number of tweets with positive veridicality for all other contenders. Table TABREF42 illustrates these counts for one contest, the Oscars Best Actress in 2014.",
"We then compute a simple prediction score, as follows: DISPLAYFORM0 ",
"where INLINEFORM0 is the set of tweets mentioning positive veridicality predictions toward candidate INLINEFORM1 , and INLINEFORM2 is the set of all tweets predicting any opponent will win. For each contest, we simply predict as winner the contender whose score is highest."
],
[
"We compare the performance of our approach against a state-of-the-art sentiment baseline BIBREF23 . Prior work on social media analysis used sentiment to make predictions about real-world outcomes. For instance, BIBREF2 correlated sentiment with public opinion polls and BIBREF1 use political sentiment to make predictions about outcomes in German elections.",
"We use a re-implementation of BIBREF23 's system to estimate sentiment for tweets in our corpus. We run the tweets obtained for every contender through the sentiment analysis system to obtain a count of positive labels. Sentiment scores are computed analogously to veridicality using Equation ( EQREF43 ). For each contest, the contender with the highest sentiment prediction score is predicted as the winner."
],
[
"We also compare our approach against a simple frequency (tweet volume) baseline. For every contender, we compute the number of tweets that has been retrieved. Frequency scores are computed in the same way as for veridicality and sentiment using Equation ( EQREF43 ). For every contest, the contender with the highest frequency score is selected to be the winner."
],
[
"Table TABREF34 gives the precision, recall and max-F1 scores for veridicality, sentiment and volume-based forecasts on all the contests. The veridicality-based approach outperforms sentiment and volume-based approaches on 9 of the 10 events considered. For the Tennis Grand Slam, the three approaches perform poorly. The difference in performance for the veridicality approach is quite lower for the Tennis events than for the other events. It is well known however that winners of tennis tournaments are very hard to predict. The performance of the players in the last minutes of the match are decisive, and even professionals have a difficult time predicting tennis winners.",
"Table TABREF39 shows the 10 top predictions made by the veridicality and sentiment-based systems on two of the events we considered - the Oscars and the presidential primaries, highlighting correct predictions."
],
[
"In addition to providing a general method for forecasting contest outcomes, our approach based on veridicality allows us to perform several novel analyses including retrospectively identifying surprise outcomes that were unexpected according to popular beliefs.",
"In Table TABREF39 , we see that the veridicality-based approach incorrectly predicts The Revenant as winning Best Film in 2016. This makes sense, because the film was widely expected to win at the time, according to popular belief. Numerous sources in the press, , , qualify The Revenant not winning an Oscar as a big surprise.",
"Similarly, for the primaries, the two incorrect predictions made by the veridicality-based approach were surprise losses. News articles , , indeed reported the loss of Maine for Trump and the loss of Indiana for Clinton as unexpected."
],
[
"Another nice feature of our approach based on veridicality is that it immediately provides an intuitive assessment on the reliability of individual Twitter accounts' predictions. For a given account, we can collect tweets about past contests, and extract those which exhibit positive veridicality toward the outcome, then simply count how often the accounts were correct in their predictions.",
"As proof of concept, we retrieved within our dataset, the user names of accounts whose tweets about Ballon d'Or contests were classified as having positive veridicality. Table TABREF56 gives accounts that made the largest number of correct predictions for Ballon d'Or awards between 2010 to 2016, sorted by users' prediction accuracy. Usernames of non-public figures are anonymized (as user 1, etc.) in the table. We did not extract more data for these users: we only look at the data we had already retrieved. Some users might not make predictions for all contests, which span 7 years.",
"Accounts like “goal_ghana\", “breakingnewsnig\" and “1Mrfutball\", which are automatically identified by our analysis, are known to post tweets predominantly about soccer."
],
[
"In this paper, we presented TwiVer, a veridicality classifier for tweets which is able to ascertain the degree of veridicality toward future contests. We showed that veridical statements on Twitter provide a strong predictive signal for winners on different types of events, and that our veridicality-based approach outperforms a sentiment and frequency baseline for predicting winners. Furthermore, our approach is able to retrospectively identify surprise outcomes. We also showed how our approach enables an intuitive yet novel method for evaluating the reliability of information sources."
],
[
"We thank our anonymous reviewers for their valuable feedback. We also thank Wei Xu, Brendan O'Connor and the Clippers group at The Ohio State University for useful suggestions. This material is based upon work supported by the National Science Foundation under Grants No. IIS-1464128 to Alan Ritter and IIS-1464252 to Marie-Catherine de Marneffe. Alan Ritter is supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center in addition to the Office of the Director of National Intelligence (ODNI) and the Intelligence Advanced Research Projects Activity (IARPA) via the Air Force Research Laboratory (AFRL) contract number FA8750-16-C-0114. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, AFRL, NSF, or the U.S. Government."
]
],
"section_name": [
"Introduction",
"Related Work",
"Measuring the Veridicality of Users' Predictions",
"Mechanical Turk Annotation",
"Veridicality Classifier",
"Features",
"Evaluation",
"Performance on held-out event types",
"Error Analysis",
"Forecasting Contest Outcomes",
"Prediction",
"Sentiment Baseline",
"Frequency Baseline",
"Results",
"Surprise Outcomes",
"Assessing the Reliability of Accounts",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"1fa3c4fc01ead95eb6969c006e2cce58cd1ff0d6",
"42ebf455c3de2646c0d83615cc7b3793f71e6804",
"52655898c337ec6b3acde9701bc0738cc395791b"
],
"answer": [
{
"evidence": [
"We restricted the data to English tweets only, as tagged by langid.py BIBREF18 . Jaccard similarity was computed between messages to identify and remove duplicates. We removed URLs and preserved only tweets that mention contenders in the text. This automatic post-processing left us with 57,711 tweets for all winners and 55,558 tweets for losers (contenders who did not win) across all events. Table TABREF17 gives the data distribution across event categories."
],
"extractive_spans": [
"English "
],
"free_form_answer": "",
"highlighted_evidence": [
"We restricted the data to English tweets only, as tagged by langid.py BIBREF18 . Jaccard similarity was computed between messages to identify and remove duplicates. We removed URLs and preserved only tweets that mention contenders in the text. This automatic post-processing left us with 57,711 tweets for all winners and 55,558 tweets for losers (contenders who did not win) across all events. Table TABREF17 gives the data distribution across event categories."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We restricted the data to English tweets only, as tagged by langid.py BIBREF18 . Jaccard similarity was computed between messages to identify and remove duplicates. We removed URLs and preserved only tweets that mention contenders in the text. This automatic post-processing left us with 57,711 tweets for all winners and 55,558 tweets for losers (contenders who did not win) across all events. Table TABREF17 gives the data distribution across event categories."
],
"extractive_spans": [
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"We restricted the data to English tweets only, as tagged by langid.py BIBREF18 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To explore the accuracy of user predictions in social media, we gathered a corpus of tweets that mention events belonging to one of the 10 types listed in Table TABREF17 . Relevant messages were collected by formulating queries to the Twitter search interface that include the name of a contender for a given contest in conjunction with the keyword win. We restricted the time range of the queries to retrieve only messages written before the time of the contest to ensure that outcomes were unknown when the tweets were written. We include 10 days of data before the event for the presidential primaries and the final presidential elections, 7 days for the Oscars, Ballon d'Or and Indian general elections, and the period between the semi-finals and the finals for the sporting events. Table TABREF15 shows several example queries to the Twitter search interface which were used to gather data. We automatically generated queries, using templates, for events scraped from various websites: 483 queries were generated for the presidential primaries based on events scraped from ballotpedia , 176 queries were generated for the Oscars, 18 for Ballon d'Or, 162 for the Eurovision contest, 52 for Tennis Grand Slams, 6 for the Rugby World Cup, 18 for the Cricket World Cup, 12 for the Football World Cup, 76 for the 2016 US presidential elections, and 68 queries for the 2014 Indian general elections.",
"We added an event prefix (e.g., “Oscars\" or the state for presidential primaries), a keyword (“win\"), and the relevant date range for the event. For example, “Oscars Leonardo DiCaprio win since:2016-2-22 until:2016-2-28\" would be the query generated for the first entry in Table TABREF3 ."
],
"extractive_spans": [],
"free_form_answer": "English",
"highlighted_evidence": [
"Relevant messages were collected by formulating queries to the Twitter search interface that include the name of a contender for a given contest in conjunction with the keyword win.",
"We added an event prefix (e.g., “Oscars\" or the state for presidential primaries), a keyword (“win\"), and the relevant date range for the event. For example, “Oscars Leonardo DiCaprio win since:2016-2-22 until:2016-2-28\" would be the query generated for the first entry in Table TABREF3 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"13121c3cf7b81793669761c87b8330e3a7d04276",
"45bcb7365424d9413abb1b55a64e409ca5e98eb6"
],
"answer": [
{
"evidence": [
"We model the conditional distribution over a tweet's veridicality toward a candidate INLINEFORM0 winning a contest against a set of opponents, INLINEFORM1 , using a log-linear model: INLINEFORM2",
"where INLINEFORM0 is the veridicality (positive, negative or neutral).",
"To extract features INLINEFORM0 , we first preprocessed tweets retrieved for a specific event to identify named entities, using BIBREF20 's Twitter NER system. Candidate ( INLINEFORM1 ) and opponent entities were identified in the tweet as follows:",
"We use five feature templates: context words, distance between entities, presence of punctuation, dependency paths, and negated keyword."
],
"extractive_spans": [
"log-linear model",
" five feature templates: context words, distance between entities, presence of punctuation, dependency paths, and negated keyword"
],
"free_form_answer": "",
"highlighted_evidence": [
"We model the conditional distribution over a tweet's veridicality toward a candidate INLINEFORM0 winning a contest against a set of opponents, INLINEFORM1 , using a log-linear model: INLINEFORM2\n\nwhere INLINEFORM0 is the veridicality (positive, negative or neutral).",
"To extract features INLINEFORM0 , we first preprocessed tweets retrieved for a specific event to identify named entities, using BIBREF20 's Twitter NER system.",
"We use five feature templates: context words, distance between entities, presence of punctuation, dependency paths, and negated keyword."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The goal of our system, TwiVer, is to automate the annotation process by predicting how veridical a tweet is toward a candidate winning a contest: is the candidate deemed to be winning, or is the author uncertain? For the purpose of our experiments, we collapsed the five labels for veridicality into three: positive veridicality (“Definitely Yes\" and “Probably Yes\"), neutral (“Uncertain about the outcome\") and negative veridicality (“Definitely No\" and “Probably No\").",
"We model the conditional distribution over a tweet's veridicality toward a candidate INLINEFORM0 winning a contest against a set of opponents, INLINEFORM1 , using a log-linear model: INLINEFORM2",
"where INLINEFORM0 is the veridicality (positive, negative or neutral).",
"To extract features INLINEFORM0 , we first preprocessed tweets retrieved for a specific event to identify named entities, using BIBREF20 's Twitter NER system. Candidate ( INLINEFORM1 ) and opponent entities were identified in the tweet as follows:",
"We use five feature templates: context words, distance between entities, presence of punctuation, dependency paths, and negated keyword."
],
"extractive_spans": [],
"free_form_answer": "Veridicality class, log-linear model for measuring distribution over a tweet's veridicality, Twitter NER system to to identify named entities, five feature templates: context words, distance between entities, presence of punctuation, dependency paths, and negated keyword.",
"highlighted_evidence": [
"For the purpose of our experiments, we collapsed the five labels for veridicality into three: positive veridicality (“Definitely Yes\" and “Probably Yes\"), neutral (“Uncertain about the outcome\") and negative veridicality (“Definitely No\" and “Probably No\").\n\nWe model the conditional distribution over a tweet's veridicality toward a candidate INLINEFORM0 winning a contest against a set of opponents, INLINEFORM1 , using a log-linear model: INLINEFORM2\n\nwhere INLINEFORM0 is the veridicality (positive, negative or neutral).\n\nTo extract features INLINEFORM0 , we first preprocessed tweets retrieved for a specific event to identify named entities, using BIBREF20 's Twitter NER system.",
"For the purpose of our experiments, we collapsed the five labels for veridicality into three: positive veridicality (“Definitely Yes\" and “Probably Yes\"), neutral (“Uncertain about the outcome\") and negative veridicality (“Definitely No\" and “Probably No\").\n\nWe model the conditional distribution over a tweet's veridicality toward a candidate INLINEFORM0 winning a contest against a set of opponents, INLINEFORM1 , using a log-linear model: INLINEFORM2\n\nwhere INLINEFORM0 is the veridicality (positive, negative or neutral).\n\nTo extract features INLINEFORM0 , we first preprocessed tweets retrieved for a specific event to identify named entities, using BIBREF20 's Twitter NER system.",
"We use five feature templates: context words, distance between entities, presence of punctuation, dependency paths, and negated keyword."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"340ac386545d09675bea7731ae04dc2834041b16"
],
"answer": [
{
"evidence": [
"The goal of our system, TwiVer, is to automate the annotation process by predicting how veridical a tweet is toward a candidate winning a contest: is the candidate deemed to be winning, or is the author uncertain? For the purpose of our experiments, we collapsed the five labels for veridicality into three: positive veridicality (“Definitely Yes\" and “Probably Yes\"), neutral (“Uncertain about the outcome\") and negative veridicality (“Definitely No\" and “Probably No\")."
],
"extractive_spans": [
"neutral (“Uncertain about the outcome\")"
],
"free_form_answer": "",
"highlighted_evidence": [
" For the purpose of our experiments, we collapsed the five labels for veridicality into three: positive veridicality (“Definitely Yes\" and “Probably Yes\"), neutral (“Uncertain about the outcome\") and negative veridicality (“Definitely No\" and “Probably No\")."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What languages are used as input?",
"What are the components of the classifier?",
"Which uncertain outcomes are forecast using the wisdom of crowds?"
],
"question_id": [
"d9354c0bb32ec037ff2aacfed58d57887a713163",
"c035a011b737b0a10deeafc3abe6a282b389d48b",
"d3fb0d84d763cb38f400b7de3daaa59ed2a1b0ab"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Examples of tweets expressing varying degrees of veridicality toward Natalie Portman winning an Oscar.",
"Table 2: Oscar nominations for Best Actor 2016.",
"Figure 1: Example of one item to be annotated, as displayed to the Turkers.",
"Table 4: Number of tweets for each event category.",
"Table 3: Examples of queries to extract tweets.",
"Figure 2: Heatmaps showing annotation distributions for one of the events - the Oscars and all event types, separating winners from losers. Vertical labels indicate veridicality (DY “Definitely Yes”, PY “Probably Yes”, UC “Uncertain about the outcome”, PN “Probably No” and DN “Definitely No”). Horizontal labels indicate desire (SW “Strongly wants the event to happen”, PW “Probably wants the event to happen”, ND “No desire about the event outcome”, PD “Probably does not want the event to happen”, SN “Strongly against the event happening”). More data in the upper left hand corner indicates there are more tweets with positive veridicality and desire.",
"Figure 3: Illustration of the three named entity tags and distance features between entities and keyword win for a tweet retrieved by the query “Oscars Leonardo DiCaprio win since:2016-2-22 until:2016-2-28”.",
"Figure 4: Precision/Recall curve showing TwiVer performance in identifying positive veridicality tweets in the test data.",
"Table 5: Feature ablation of the positive veridicality classifier by removing each group of features from the full set. The point of maximum F1 score is shown in each case.",
"Table 6: Some high-weight features for positive and negative veridicality.",
"Table 7: Some classification errors made by TwiVer. Contenders queried for are highlighted.",
"Table 8: Performance of Veridicality, Sentiment baseline, and Frequency baseline on all event categories (%).",
"Table 9: F1 scores for each event when training on all events vs. holding out that event from training. |Tt| is the number of tweets of that event category present in the test dataset.",
"Table 10: Top 10 predictions of winners for Oscars and primaries based on veridicality and sentiment scores. Correct predictions are highlighted. “!” indicates a loss which wasn’t expected.",
"Table 11: Positive veridicality tweet counts for the Best Actress category in 2014: |Tc| is the count of positive veridicality tweets for the contender under consideration and |TO| is the count of positive veridicality tweets for the other contenders.",
"Table 12: List of users sorted by how accurate they were in their Ballon d’Or predictions."
],
"file": [
"1-Table1-1.png",
"2-Table2-1.png",
"3-Figure1-1.png",
"3-Table4-1.png",
"3-Table3-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"5-Figure4-1.png",
"5-Table5-1.png",
"6-Table6-1.png",
"6-Table7-1.png",
"7-Table8-1.png",
"7-Table9-1.png",
"8-Table10-1.png",
"8-Table11-1.png",
"9-Table12-1.png"
]
} | [
"What languages are used as input?",
"What are the components of the classifier?"
] | [
[
"1707.07212-Measuring the Veridicality of Users' Predictions-2",
"1707.07212-Measuring the Veridicality of Users' Predictions-3"
],
[
"1707.07212-Features-0",
"1707.07212-Veridicality Classifier-2",
"1707.07212-Veridicality Classifier-0",
"1707.07212-Veridicality Classifier-3"
]
] | [
"English",
"Veridicality class, log-linear model for measuring distribution over a tweet's veridicality, Twitter NER system to to identify named entities, five feature templates: context words, distance between entities, presence of punctuation, dependency paths, and negated keyword."
] | 325 |
1810.12897 | Topic-Specific Sentiment Analysis Can Help Identify Political Ideology | Ideological leanings of an individual can often be gauged by the sentiment one expresses about different issues. We propose a simple framework that represents a political ideology as a distribution of sentiment polarities towards a set of topics. This representation can then be used to detect ideological leanings of documents (speeches, news articles, etc.) based on the sentiments expressed towards different topics. Experiments performed using a widely used dataset show the promise of our proposed approach that achieves comparable performance to other methods despite being much simpler and more interpretable. | {
"paragraphs": [
[
"The ideological leanings of a person within the left-right political spectrum are often reflected by how one feels about different topics and by means of preferences among various choices on particular issues. For example, a left-leaning person would prefer nationalization and state control of public services (such as healthcare) where privatization would be often preferred by people that lean towards the right. Likewise, a left-leaning person would often be supportive of immigration and will often talk about immigration in a positive manner citing examples of benefits of immigration on a country's economy. A right-leaning person, on the other hand, will often have a negative opinion about immigration.",
"Most of the existing works on political ideology detection from text have focused on utilizing bag-of-words and other syntactic features to capture variations in language use BIBREF0 , BIBREF1 , BIBREF2 . We propose an alternative mechanism for political ideology detection based on sentiment analysis. We posit that adherents of a political ideology generally have similar sentiment toward specific topics (for example, right wing followers are often positive towards free markets, lower tax rates, etc.) and thus, a political ideology can be represented by a characteristic sentiment distribution over different topics (Section SECREF3 ). This topic-specific sentiment representation of a political ideology can then be used for automatic ideology detection by comparing the topic-specific sentiments as expressed by the content in a document (news article, magazine article, collection of social media posts by a user, utterances in a conversation, etc.).",
"In order to validate our hypothesis, we consider exploiting the sentiment information towards topics from archives of political debates to build a model for identifying political orientation of speakers as one of right or left leaning, which corresponds to republicans and democrats respectively, within the context of US politics. This is inspired by our observation that the political leanings of debators are often expressed in debates by way of speakers' sentiments towards particular topics. Parliamentary or Senate debates often bring the ideological differences to the centre stage, though somewhat indirectly. Heated debates in such forums tend to focus on the choices proposed by the executive that are in sharp conflict with the preference structure of the opposition members. Due to this inherent tendency of parliamentary debates to focus on topics of disagreement, the sentiments exposited in debates hold valuable cues to identify the political orientation of the participants.",
"We develop a simple classification model that uses a topic-specific sentiment summarization for republican and democrat speeches separately. Initial results of experiments conducted using a widely used dataset of US Congress debates BIBREF3 are encouraging and show that this simple model compares well with classification models that employ state-of-the-art distributional text representations (Section SECREF4 )."
],
[
"Political ideology detection has been a relatively new field of research within the NLP community. Most of the previous efforts have focused on capturing the variations in language use in text representing content of different ideologies. Beissmann et al. ideologyPrediction-text employ bag-of-word features for ideology detection in different domains such as speeches in German parliament, party manifestos, and facebook posts. Sim et al. ideological-proportion-speeches use a labeled corpus of political writings to infer lexicons of cues strongly associated with different ideologies. These “ideology lexicons” are then used to analyze political speeches and identify their ideological leanings. Iyyer at al. rnn-ideology recently adopted a recursive neural network architecture to detect ideological bias of single sentences. In addition, topic models have also been used for ideology detection by identifying latent topic distributions across different ideologies BIBREF4 , BIBREF5 . Gerrish and Blei legislativeRollCalls connected text of the legislations to voting patterns of legislators from different parties."
],
[
"Sentiment analysis has proved to be a useful tool in detecting controversial topics as it can help identify topics that evoke different feelings among people on opposite side of the arguments. Mejova et al. controversy-news2 analyzed language use in controversial news articles and found that a writer may choose to highlight the negative aspects of the opposing view rather than emphasizing the positive aspects of one’s view. Lourentzou et al. controversy-news3 utilize the sentiments expressed in social media comments to identify controversial portions of news articles. Given a news article and its associated comments on social media, the paper links comments with each sentence of the article (by using a sentence as a query and retrieving comments using BM25 score). For all the comments associated with a sentence, a sentiment score is then computed, and sentences with large variations in positive and negative comments are identified as controversial sentences. Choi et al. controversy-news go one step further and identify controversial topics and their sub-topics in news articles."
],
[
"Let INLINEFORM0 be a corpus of political documents such as speeches or social media postings. Let INLINEFORM1 be the set of ideology class labels. Typical scenarios would just have two class labels (i.e., INLINEFORM2 ), but we will outline our formulation for a general case. For document INLINEFORM3 , INLINEFORM4 denotes the class label for that document. Our method relies on the usage of topics, each of which are most commonly represented by a probability distribution over the vocabulary. The set of topics over INLINEFORM5 , which we will denote using INLINEFORM6 , may be identified using a topic modeling method such as LDA BIBREF6 unless a pre-defined set of handcrafted topics is available.",
"Given a document INLINEFORM0 and a topic INLINEFORM1 , our method relies on identifying the sentiment as expressed by content in INLINEFORM2 towards the topic INLINEFORM3 . The sentiment could be estimated in the form of a categorical label such as one of positive, negative and neutral BIBREF7 . Within our modelling, however, we adopt a more fine-grained sentiment labelling, whereby the sentiment for a topic-document pair is a probability distribution over a plurality of ordinal polarity classes ranging from strongly positive to strongly negative. Let INLINEFORM4 represent the topic-sentiment polarity vector of INLINEFORM5 towards INLINEFORM6 such that INLINEFORM7 represents the probability of the polarity class INLINEFORM8 . Combining the topic-sentiment vectors for all topics yields a document-specific topic-sentiment matrix (TSM) as follows: DISPLAYFORM0 ",
"Each row in the matrix corresponds to a topic within INLINEFORM0 , with each element quantifying the probability associated with the sentiment polarity class INLINEFORM1 for the topic INLINEFORM2 within document INLINEFORM3 . The topic-sentiment matrix above may be regarded as a sentiment signature for the document over the topic set INLINEFORM4 ."
],
[
"In constructing TSMs, we make use of topic-specific sentiment estimations as outlined above. Typical sentiment analysis methods (e.g., NLTK Sentiment Analysis) are designed to determine the overall sentiment for a text segment. Using such sentiment analysis methods in order to determine topic-specific sentiments is not necessarily straightforward. We adopt a simple keyword based approach for the task. For every document-topic pair INLINEFORM0 , we extract the sentences from INLINEFORM1 that contain at least one of the top- INLINEFORM2 keywords associated with the topic INLINEFORM3 . We then collate the sentences in the order in which they appear in INLINEFORM4 and form a mini-document INLINEFORM5 . This document INLINEFORM6 is then passed on to a conventional sentiment analyzer that would then estimate the sentiment polarity as a probability distribution over sentiment polarity classes, which then forms the INLINEFORM7 vector. We use INLINEFORM8 and the RNN based sentiment analyzer BIBREF8 in our method."
],
[
"We now outline a simple classification model that uses summaries of TSMs. Given a labeled training set of documents, we would like to find the prototypical TSM corresponding to each label. This can be done by identifying the matrix that minimizes the cumulative deviation from those corresponding to the documents with the label. DISPLAYFORM0 ",
"where INLINEFORM0 denotes the Frobenius norm. It turns out that such a label-specific signature matrix is simply the mean of the topic-sentiment matrices corresponding to documents that bear the respective label, which may be computed using the below equation. DISPLAYFORM0 ",
"For an unseen (test) document INLINEFORM0 , we first compute the TSM INLINEFORM1 , and assign it the label corresponding to the label whose TSM is most proximal to INLINEFORM2 . DISPLAYFORM0 "
],
[
"In two class scenarios with label such as INLINEFORM0 or INLINEFORM1 as we have in our dataset, TSMs can be flattened into a vector and fed into a logistic regression classifier that learns weights - i.e., co-efficients for each topic + sentiment polarity class combination. These weights can then be used to estimate the label by applying it to the new document's TSM."
],
[
"We used the publicly available Convote dataset BIBREF3 for our experiments. The dataset provides transcripts of debates in the House of Representatives of the U.S Congress for the year 2005. Each file in the dataset corresponds to a single, uninterrupted utterance by a speaker in a given debate. We combine all the utterances of a speaker in a given debate in a single file to capture different opinions/view points of the speaker about the debate topic. We call this document the view point document (VPD) representing the speaker's opinion about different aspects of the issue being debated. The dataset also provides political affiliations of all the speakers – Republican (R), Democrat (D), and Independent (I). With there being only six documents for the independent class (four in training, two in test), we excluded them from our evaluation. Table TABREF15 summarizes the statistics about the dataset and distribution of different classes. We obtained 50 topics using LDA from Mallet run over the training dataset. The topic-sentiment matrix was obtained using the Stanford CoreNLP sentiment API BIBREF9 which provides probability distributions over a set of five sentiment polarity classes."
],
[
"In order to evaluate our proposed TSM-based methods - viz., nearest class (NC) and logistic regression (LR) - we use the following methods in our empirical evaluation.",
"GloVe-d2v: We use pre-trained GloVe BIBREF10 word embeddings to compute vector representation of each VPD by averaging the GloVe vectors for all words in the document. A logistic regression classifier is then trained on the vector representations thus obtained.",
"GloVe-d2v+TSM: A logistic regression classifier trained on the GloVe features as well as TSM features."
],
[
"Table TABREF20 reports the classification results for different methods described above. TSM-NC, the method that uses the INLINEFORM0 vectors and performs simple nearest class classification achieves an overall accuracy of INLINEFORM1 . Next, training a logistic regression classifier trained on INLINEFORM2 vectors as features, TSM-LR, achieves significant improvement with an overall accuracy of INLINEFORM3 . The word embedding based baseline, the GloVe-d2v method, achieves slightly lower performance with an overall accuracy of INLINEFORM4 . However, we do note that the per-class performance of GloVe-d2v method is more balanced with about INLINEFORM5 accuracy for both classes. The TSM-LR method on the other hand achieves about INLINEFORM6 for INLINEFORM7 class and only INLINEFORM8 for the INLINEFORM9 class. The results obtained are promising and lend weight to out hypothesis that ideological leanings of a person can be identified by using the fine-grained sentiment analysis of the viewpoint a person has towards different underlying topics."
],
[
"Towards analyzing the significance of the results, we would like to start with drawing attention to the format of the data used in the TSM methods. The document-specific TSM matrices do not contain any information about the topics themselves, but only about the sentiment in the document towards each topic; one may recollect that INLINEFORM0 is a quantification of the strength of the sentiment in INLINEFORM1 towards topic INLINEFORM2 . Thus, in contrast to distributional embeddings such as doc2vec, TSMs contain only the information that directly relates to sentiment towards specific topics that are learnt from across the corpus. The results indicate that TSM methods are able to achieve comparable performance to doc2vec-based methods despite usage of only a small slice of informatiom. This points to the importance of sentiment information in determining the political leanings from text. We believe that leveraging TSMs along with distributional embeddings in a manner that can combine the best of both views would improve the state-of-the-art of political ideology detection.",
"Next, we also studied if there are topics that are more polarizing than others and how different topics impact classification performance. We identified polarizing topics, i.e, topics that invoke opposite sentiments across two classes (ideologies) by using the following equation. DISPLAYFORM0 ",
"Here, INLINEFORM0 represent the sentiment vectors for topic INLINEFORM1 for republican and democrat classes. Note that these sentiment vectors are the rows corresponding to topic INLINEFORM2 in TSMs for the two classes, respectively.",
"Table TABREF21 lists the top five topics with most distance, i.e., most polarizing topics (top) and five topics with least distance, i.e.,least polarizing topics (bottom) as computed by equation EQREF23 . Note that the topics are represented using the top keywords that they contain according to the probability distribution of the topic. We observe that the most polarizing topics include topics related to healthcare (H3, H4), military programs (H5), and topics related to administration processes (H1 and H2). The least polarizing topics include topics related to worker safety (L3) and energy projects (L2). One counter-intuitive observation is topic related to gun control (L4) that is amongst the least polarizing topics. This anomaly could be attributed to only a few speeches related to this issue in the training set (only 23 out of 1175 speeches mention gun) that prevents a reliable estimate of the probability distributions. We observed similar low occurrences of other lower distance topics too indicating the potential for improvements in computation of topic-specific sentiment representations with more data. In fact, performing the nearest neighbor classification INLINEFORM0 with only top-10 most polarizing topics led to improvements in classification accuracy from INLINEFORM1 to INLINEFORM2 suggesting that with more data, better INLINEFORM3 representations could be learned that are better at discriminating between different ideologies."
],
[
"We proposed to exploit topic-specific sentiment analysis for the task of automatic ideology detection from text. We described a simple framework for representing political ideologies and documents as a matrix capturing sentiment distributions over topics and used this representation for classifying documents based on their topic-sentiment signatures. Empirical evaluation over a widely used dataset of US Congressional speeches showed that the proposed approach performs on a par with classifiers using distributional text representations. In addition, the proposed approach offers simplicity and easy interpretability of results making it a promising technique for ideology detection. Our immediate future work will focus on further solidifying our observations by using a larger dataset to learn better TSMs for different ideologies. Further, the framework easily lends itself to be used for detecting ideological leanings of authors, social media users, news websites, magazines, etc. by computing their TSMs and comparing against the TSMs of different ideologies."
],
[
"We would like to thank the anonymous reviewers for their valuable comments and suggestions that helped us improve the quality of this work."
]
],
"section_name": [
"Introduction",
"Political Ideology Detection",
"Sentiment Analysis for Controversy Detection",
"Using Topic Sentiments for Ideology Detection",
"Determining Topic-specific Sentiments",
"Nearest TSM Classification",
"Logistic Regression Classification",
"Dataset",
"Methods",
"Results",
"Discussion",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"1ef30fec40e42f87ad2ab3871a7a9038a1417d18",
"6e2d4d8cc01ae8d679a8e587e7e9c9cf0372765b"
],
"answer": [
{
"evidence": [
"We used the publicly available Convote dataset BIBREF3 for our experiments. The dataset provides transcripts of debates in the House of Representatives of the U.S Congress for the year 2005. Each file in the dataset corresponds to a single, uninterrupted utterance by a speaker in a given debate. We combine all the utterances of a speaker in a given debate in a single file to capture different opinions/view points of the speaker about the debate topic. We call this document the view point document (VPD) representing the speaker's opinion about different aspects of the issue being debated. The dataset also provides political affiliations of all the speakers – Republican (R), Democrat (D), and Independent (I). With there being only six documents for the independent class (four in training, two in test), we excluded them from our evaluation. Table TABREF15 summarizes the statistics about the dataset and distribution of different classes. We obtained 50 topics using LDA from Mallet run over the training dataset. The topic-sentiment matrix was obtained using the Stanford CoreNLP sentiment API BIBREF9 which provides probability distributions over a set of five sentiment polarity classes.",
"FLOAT SELECTED: Table 3: List of most polarizing (top) and least polarizing (bottom) topics as computed using equation 5."
],
"extractive_spans": [
"We obtained 50 topics using LDA"
],
"free_form_answer": "",
"highlighted_evidence": [
"We obtained 50 topics using LDA from Mallet run over the training dataset. The topic-sentiment matrix was obtained using the Stanford CoreNLP sentiment API BIBREF9 which provides probability distributions over a set of five sentiment polarity classes.",
"FLOAT SELECTED: Table 3: List of most polarizing (top) and least polarizing (bottom) topics as computed using equation 5."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF21 lists the top five topics with most distance, i.e., most polarizing topics (top) and five topics with least distance, i.e.,least polarizing topics (bottom) as computed by equation EQREF23 . Note that the topics are represented using the top keywords that they contain according to the probability distribution of the topic. We observe that the most polarizing topics include topics related to healthcare (H3, H4), military programs (H5), and topics related to administration processes (H1 and H2). The least polarizing topics include topics related to worker safety (L3) and energy projects (L2). One counter-intuitive observation is topic related to gun control (L4) that is amongst the least polarizing topics. This anomaly could be attributed to only a few speeches related to this issue in the training set (only 23 out of 1175 speeches mention gun) that prevents a reliable estimate of the probability distributions. We observed similar low occurrences of other lower distance topics too indicating the potential for improvements in computation of topic-specific sentiment representations with more data. In fact, performing the nearest neighbor classification INLINEFORM0 with only top-10 most polarizing topics led to improvements in classification accuracy from INLINEFORM1 to INLINEFORM2 suggesting that with more data, better INLINEFORM3 representations could be learned that are better at discriminating between different ideologies."
],
"extractive_spans": [],
"free_form_answer": "debate topics such as healthcare, military programs, administration processes, worker safety, energy projects, gun control.",
"highlighted_evidence": [
"Table TABREF21 lists the top five topics with most distance, i.e., most polarizing topics (top) and five topics with least distance, i.e.,least polarizing topics (bottom) as computed by equation EQREF23 . Note that the topics are represented using the top keywords that they contain according to the probability distribution of the topic. We observe that the most polarizing topics include topics related to healthcare (H3, H4), military programs (H5), and topics related to administration processes (H1 and H2). The least polarizing topics include topics related to worker safety (L3) and energy projects (L2). One counter-intuitive observation is topic related to gun control (L4) that is amongst the least polarizing topics. This anomaly could be attributed to only a few speeches related to this issue in the training set (only 23 out of 1175 speeches mention gun) that prevents a reliable estimate of the probability distributions. We observed similar low occurrences of other lower distance topics too indicating the potential for improvements in computation of topic-specific sentiment representations with more data. In fact, performing the nearest neighbor classification INLINEFORM0 with only top-10 most polarizing topics led to improvements in classification accuracy from INLINEFORM1 to INLINEFORM2 suggesting that with more data, better INLINEFORM3 representations could be learned that are better at discriminating between different ideologies."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"dc25e8ecb9795d0e82c4e89b870ee68b9385409a",
"1ef1065bbc7d79c3180f391e80d3253e17d0dc5f"
],
"answer": [
{
"evidence": [
"In order to evaluate our proposed TSM-based methods - viz., nearest class (NC) and logistic regression (LR) - we use the following methods in our empirical evaluation.",
"GloVe-d2v: We use pre-trained GloVe BIBREF10 word embeddings to compute vector representation of each VPD by averaging the GloVe vectors for all words in the document. A logistic regression classifier is then trained on the vector representations thus obtained."
],
"extractive_spans": [
"We use pre-trained GloVe BIBREF10 word embeddings to compute vector representation of each VPD by averaging the GloVe vectors for all words in the document. A logistic regression classifier is then trained on the vector representations thus obtained."
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to evaluate our proposed TSM-based methods - viz., nearest class (NC) and logistic regression (LR) - we use the following methods in our empirical evaluation.\n\nGloVe-d2v: We use pre-trained GloVe BIBREF10 word embeddings to compute vector representation of each VPD by averaging the GloVe vectors for all words in the document. A logistic regression classifier is then trained on the vector representations thus obtained."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to evaluate our proposed TSM-based methods - viz., nearest class (NC) and logistic regression (LR) - we use the following methods in our empirical evaluation.",
"GloVe-d2v: We use pre-trained GloVe BIBREF10 word embeddings to compute vector representation of each VPD by averaging the GloVe vectors for all words in the document. A logistic regression classifier is then trained on the vector representations thus obtained."
],
"extractive_spans": [
"GloVe-d2v"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to evaluate our proposed TSM-based methods - viz., nearest class (NC) and logistic regression (LR) - we use the following methods in our empirical evaluation.\n\nGloVe-d2v: We use pre-trained GloVe BIBREF10 word embeddings to compute vector representation of each VPD by averaging the GloVe vectors for all words in the document. A logistic regression classifier is then trained on the vector representations thus obtained."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"466d6bd466d6f8e6d60c283a1883e932830ee233",
"d3044d7a550689e5194c6d0384b937ce88b0320d"
],
"answer": [
{
"evidence": [
"We used the publicly available Convote dataset BIBREF3 for our experiments. The dataset provides transcripts of debates in the House of Representatives of the U.S Congress for the year 2005. Each file in the dataset corresponds to a single, uninterrupted utterance by a speaker in a given debate. We combine all the utterances of a speaker in a given debate in a single file to capture different opinions/view points of the speaker about the debate topic. We call this document the view point document (VPD) representing the speaker's opinion about different aspects of the issue being debated. The dataset also provides political affiliations of all the speakers – Republican (R), Democrat (D), and Independent (I). With there being only six documents for the independent class (four in training, two in test), we excluded them from our evaluation. Table TABREF15 summarizes the statistics about the dataset and distribution of different classes. We obtained 50 topics using LDA from Mallet run over the training dataset. The topic-sentiment matrix was obtained using the Stanford CoreNLP sentiment API BIBREF9 which provides probability distributions over a set of five sentiment polarity classes."
],
"extractive_spans": [
"Convote dataset BIBREF3"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used the publicly available Convote dataset BIBREF3 for our experiments."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We develop a simple classification model that uses a topic-specific sentiment summarization for republican and democrat speeches separately. Initial results of experiments conducted using a widely used dataset of US Congress debates BIBREF3 are encouraging and show that this simple model compares well with classification models that employ state-of-the-art distributional text representations (Section SECREF4 ).",
"We used the publicly available Convote dataset BIBREF3 for our experiments. The dataset provides transcripts of debates in the House of Representatives of the U.S Congress for the year 2005. Each file in the dataset corresponds to a single, uninterrupted utterance by a speaker in a given debate. We combine all the utterances of a speaker in a given debate in a single file to capture different opinions/view points of the speaker about the debate topic. We call this document the view point document (VPD) representing the speaker's opinion about different aspects of the issue being debated. The dataset also provides political affiliations of all the speakers – Republican (R), Democrat (D), and Independent (I). With there being only six documents for the independent class (four in training, two in test), we excluded them from our evaluation. Table TABREF15 summarizes the statistics about the dataset and distribution of different classes. We obtained 50 topics using LDA from Mallet run over the training dataset. The topic-sentiment matrix was obtained using the Stanford CoreNLP sentiment API BIBREF9 which provides probability distributions over a set of five sentiment polarity classes."
],
"extractive_spans": [
"Convote dataset BIBREF3"
],
"free_form_answer": "",
"highlighted_evidence": [
"Initial results of experiments conducted using a widely used dataset of US Congress debates BIBREF3 are encouraging and show that this simple model compares well with classification models that employ state-of-the-art distributional text representations (Section SECREF4 ).",
"We used the publicly available Convote dataset BIBREF3 for our experiments. The dataset provides transcripts of debates in the House of Representatives of the U.S Congress for the year 2005. Each file in the dataset corresponds to a single, uninterrupted utterance by a speaker in a given debate. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"What set topics are looked at?",
"What were the baselines?",
"Which widely used dataset did the authors use?"
],
"question_id": [
"6da1320fa25b2b6768358d3233a5ecf99cc73db5",
"351f7b254e80348221e0654478663a5e53d3fe65",
"d323f0d65b57b30ae85fb9f24298927a3d1216e9"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1: Distribution of different classes in the ConVote dataset.",
"Table 2: Results achieved by different methods on the ideology classification task.",
"Table 3: List of most polarizing (top) and least polarizing (bottom) topics as computed using equation 5."
],
"file": [
"5-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png"
]
} | [
"What set topics are looked at?"
] | [
[
"1810.12897-Dataset-0",
"1810.12897-6-Table3-1.png",
"1810.12897-Discussion-3"
]
] | [
"debate topics such as healthcare, military programs, administration processes, worker safety, energy projects, gun control."
] | 326 |
1611.01884 | AC-BLSTM: Asymmetric Convolutional Bidirectional LSTM Networks for Text Classification | Recently, deep learning models have been shown to be capable of achieving remarkable performance in sentence and document classification tasks. In this work, we propose a novel framework called AC-BLSTM for modeling sentences and documents, which combines the asymmetric convolution neural network (ACNN) with the Bidirectional Long Short-Term Memory network (BLSTM). Experimental results demonstrate that our model achieves state-of-the-art results on five tasks, including sentiment analysis, question type classification, and subjectivity classification. In order to further improve the performance of AC-BLSTM, we propose a semi-supervised learning framework called G-AC-BLSTM for text classification by combining a generative model with AC-BLSTM. | {
"paragraphs": [
[
"Deep neural models recently have achieved remarkable results in computer vision BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , and a range of NLP tasks such as sentiment classification BIBREF4 , BIBREF5 , BIBREF6 , and question-answering BIBREF7 . Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) especially Long Short-term Memory Network (LSTM), are used wildly in natural language processing tasks. With increasing datas, these two methods can reach considerable performance by requiring only limited domain knowledge and easy to be finetuned to specific applications at the same time.",
"CNNs, which have the ability of capturing local correlations of spatial or temporal structures, have achieved excellent performance in computer vision and NLP tasks. And recently the emerge of some new techniques, such as Inception module BIBREF8 , Batchnorm BIBREF9 and Residual Network BIBREF3 have also made the performance even better. For sentence modeling, CNNs perform excellently in extracting n-gram features at different positions of a sentence through convolutional filters.",
"RNNs, with the ability of handling sequences of any length and capturing long-term dependencies, , have also achieved remarkable results in sentence or document modeling tasks. LSTMs BIBREF10 were designed for better remembering and memory accesses, which can also avoid the problem of gradient exploding or vanishing in the standard RNN. Be capable of incorporating context on both sides of every position in the input sequence, BLSTMs introduced in BIBREF11 , BIBREF12 have reported to achieve great performance in Handwriting Recognition BIBREF13 , and Machine Translation BIBREF14 tasks.",
"Generative adversarial networks (GANs) BIBREF15 are a class of generative models for learning how to produce images. Basically, GANs consist of a generator G and a discriminator D, which are trained based on game theory. G maps a input noise vector to an output image, while D takes in an image then outputs a prediction whether the input image is a sample generated by G. Recently, applications of GANs have shown that they can generate promising results BIBREF16 , BIBREF17 . Several recent papers have also extended GANs to the semi-supervised context BIBREF18 , BIBREF19 by simply increasing the dimension of the classifier output from INLINEFORM0 to INLINEFORM1 , which the samples of the extra class are generated by G.",
"In this paper, We proposed an end-to-end architecture named AC-BLSTM by combining the ACNN with the BLSTM for sentences and documents modeling. In order to make the model deeper, instead of using the normal convolution, we apply the technique proposed in BIBREF8 which employs a INLINEFORM0 convolution followed by a INLINEFORM1 convolution by spatial factorizing the INLINEFORM2 convolution. And we use the pretrained word2vec vectors BIBREF20 as the ACNN input, which were trained on 100 billion words of Google News to learn the higher-level representations of n-grams. The outputs of the ACNN are organized as the sequence window feature to feed into the multi-layer BLSTM. So our model does not rely on any other extra domain specific knowledge and complex preprocess, e.g. word segmentation, part of speech tagging and so on. We evaluate AC-BLSTM on sentence-level and document-level tasks including sentiment analysis, question type classification, and subjectivity classification. Experimental results demonstrate the effectiveness of our approach compared with other state-of-the-art methods. Further more, inspired by the ideas of extending GANs to the semi-supervised learning context by BIBREF18 , BIBREF19 , we propose a semi-supervised learning framework for text classification which further improve the performance of AC-BLSTM.",
"The rest of the paper is organized as follows. Section 2 presents a brief review of related work. Section 3 discusses the architecture of our AC-BLSTM and our semi-supervised framework. Section 4 presents the experiments result with comparison analysis. Section 5 concludes the paper."
],
[
"Deep learning models have made remarkable progress in various NLP tasks recently. For example, word embeddings BIBREF20 , BIBREF21 , question answearing BIBREF7 , sentiment analysis BIBREF22 , BIBREF23 , BIBREF24 , machine translation BIBREF25 and so on. CNNs and RNNs are two wildly used architectures among these models. The success of deep learning models for NLP mostly relates to the progress in learning distributed word representations BIBREF20 , BIBREF21 . In these mothods, instead of using one-hot vectors by indexing words into a vocabulary, each word is modeled as a low dimensional and dense vector which encodes both semantic and syntactic information of words.",
"Our model mostly relates to BIBREF4 which combines CNNs of different filter lengths and either static or fine-tuned word vectors, and BIBREF5 which stacks CNN and LSTM in a unified architecture with static word vectors. It is known that in computer vision, the deeper network architecture usually possess the better performance. We consider NLP also has this property. In order to make our model deeper, we apply the idea of asymmetric convolution introduced in BIBREF8 , which can reduce the number of the parameters, and increase the representation ability of the model by adding more nonlinearity. Then we stack the multi-layer BLSTM, which is cable of analysing the future as well as the past of every position in the sequence, on top of the ACNN. The experiment results also demonstrate the effectiveness of our model."
],
[
"In this section, we will introduce our AC-BLSTM architecture in detail. We first describe the ACNN which takes the word vector represented matrix of the sentence as input and produces higher-level presentation of word features. Then we introduce the BLSTM which can incorporate context on both sides of every position in the input sequence. Finally, we introduce the techniques to avoid overfitting in our model. An overall illustration of our architecture is shown in Figure FIGREF1 ."
],
[
"Let x INLINEFORM0 be the INLINEFORM1 -dimensional word vector corresponding to the INLINEFORM2 -th word in the sentence and INLINEFORM3 be the maximum length of the sentence in the dataset. Then the sentence with length INLINEFORM4 is represented as DISPLAYFORM0 ",
"For those sentences that are shorter than INLINEFORM0 , we simply pad them with space.",
"In general, let INLINEFORM0 in which INLINEFORM1 be the length of convolution filter. Then instead of employing the INLINEFORM2 convolution operation described in BIBREF4 , BIBREF5 , we apply the asymmetric convolution operation inspired by BIBREF8 to the input matrix which factorize the INLINEFORM3 convolution into INLINEFORM4 convolution followed by a INLINEFORM5 convolution. And in experiments, we found that employ this technique can imporve the performance. The following part of this subsection describe how we define the asymmetric convolution layer.",
"First, the convolution operation corresponding to the INLINEFORM0 convolution with filter w INLINEFORM1 is applied to each word x INLINEFORM2 in the sentence and generates corresponding feature m INLINEFORM3 DISPLAYFORM0 ",
"where INLINEFORM0 is element-wise multiplication, INLINEFORM1 is a bias term and INLINEFORM2 is a non-linear function such as the sigmoid, hyperbolic tangent, etc. In our case, we choose ReLU BIBREF26 as the nonlinear function. Then we get the feature map m INLINEFORM3 DISPLAYFORM0 ",
"After that, the second convolution operation of the asymmetric convolution layer corresponding to the INLINEFORM0 convolution with filter w INLINEFORM1 is applied to a window of INLINEFORM2 features in the feature map m INLINEFORM3 to produce the new feature c INLINEFORM4 and the feature map c INLINEFORM5 DISPLAYFORM0 DISPLAYFORM1 ",
"with c INLINEFORM0 . Where INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are the same as described above.",
"As shown in Figure FIGREF1 , we simultaneously apply three asymmetric convolution layers to the input matrix, which all have the same number of filters denoted as INLINEFORM0 . Thus the output of the asymmetric convolution layer has INLINEFORM1 feature maps. To generate the input sequence of the BLSTM, for each output sequence of the second convolution operation in the aysmmetric convolution layer, we slice the feature maps by channel then obtained sequence of INLINEFORM2 new features c INLINEFORM3 where INLINEFORM4 . Then we concatanate c INLINEFORM5 , c INLINEFORM6 and c INLINEFORM7 to get the input feature for each time step DISPLAYFORM0 ",
"where INLINEFORM0 for INLINEFORM1 and INLINEFORM2 . In general, those c INLINEFORM3 where INLINEFORM4 and INLINEFORM5 must be dropped in order to maintain the same sequence length, which will cause the loss of some information. In our model, instead of simply cutting the sequence, we use a simple trick to obtain the same sequence length without losing the useful information as shown in Figure FIGREF2 . For each output sequence INLINEFORM6 obtained from the second convolution operation with filter length INLINEFORM7 , we take those c INLINEFORM8 where INLINEFORM9 then apply a fullyconnected layer to get a new feature, which has the same dimension of c INLINEFORM10 , to replace the ( INLINEFORM11 +1)-th feature in the origin sequence."
],
[
"First introduced in BIBREF10 and shown as a successful model recently, LSTM is a RNN architecture specifically designed to bridge long time delays between relevant input and target events, making it suitable for problems where long range context is required, such as handwriting recognition, machine translation and so on.",
"For many sequence processing tasks, it is useful to analyze the future as well as the past of a given point in the series. Whereas standard RNNs make use of previous context only, BLSTM BIBREF11 is explicitly designed for learning long-term dependencies of a given point on both side, which has also been shown to outperform other neural network architectures in framewise phoneme recognition BIBREF12 .",
"Therefore we choose BLSTM on top of the ACNN to learn such dependencies given the sequence of higher-level features. And single layer BLSTM can extend to multi-layer BLSTM easily. Finally, we concatenate all hidden state of all the time step of BLSTM, or concatenate the last layer of all the time step hidden state of multi-layer BLSTM, to obtain final representation of the text and we add a softmax layer on top of the model for classification."
],
[
"Our semi-supervised text classification framewrok is inspired by works BIBREF18 , BIBREF19 . We assume the original classifier classify a sample into one of INLINEFORM0 possible classes. So we can do semi-supervised learning by simply adding samples from a generative network G to our dataset and labeling them to an extra class INLINEFORM1 . And correspondingly the dimension of our classifier output increases from INLINEFORM2 to INLINEFORM3 . The configuration of our generator network G is inspired by the architecture proposed in BIBREF16 . And we modify the architecture to make it suitable to the text classification tasks. Table TABREF13 shows the configuration of each layer in the generator G. Lets assume the training batch size is INLINEFORM4 and the percentage of the generated samples among a batch training samples is INLINEFORM5 . At each iteration of the training process, we first generate INLINEFORM6 samples from the generator G then we draw INLINEFORM7 samples from the real dataset. We then perform gradient descent on the AC-BLSTM and generative net G and finally update the parameters of both nets."
],
[
"For model regularization, we employ two commonly used techniques to prevent overfitting during training: dropout BIBREF27 and batch normalization BIBREF9 . In our model, we apply dropout to the input feature of the BLSTM, and the output of BLSTM before the softmax layer. And we apply batch normalization to outputs of each convolution operation just before the relu activation. During training, after we get the gradients of the AC-BLSTM network, we first calculate the INLINEFORM0 INLINEFORM1 of all gradients and sum together to get INLINEFORM2 . Then we compare the INLINEFORM3 to 0.5. If the INLINEFORM4 is greater than 0.5, we let all the gradients multiply with INLINEFORM5 , else just use the original gradients to update the weights."
],
[
"We evaluate our model on various benchmarks. Stanford Sentiment Treebank (SST) is a popular sentiment classification dataset introduced by BIBREF33 . The sentences are labeled in a fine-grained way (SST-1): very negative, negative, neutral, positive, very positive. The dataset has been split into 8,544 training, 1,101 validation, and 2,210 testing sentences. By removing the neutral sentences, SST can also be used for binary classification (SST-2), which has been split into 6,920 training, 872 validation, and 1,821 testing. Since the data is provided in the format of sub-sentences, we train the model on both phrases and sentences but only test on the sentences as in several previous works BIBREF33 , BIBREF6 .",
"Movie Review Data (MR) proposed by BIBREF34 is another dataset for sentiment analysis of movie reviews. The dataset consists of 5,331 positive and 5,331 negative reviews, mostly in one sentence. We follow the practice of using 10-fold cross validation to report the result.",
"Furthermore, we apply AC-BLSTM on the subjectivity classification dataset (SUBJ) released by BIBREF35 . The dataset contains 5,000 subjective sentences and 5,000 objective sentences. We also follow the practice of using 10-fold cross validation to report the result.",
"We also benchmark our system on question type classification task (TREC) BIBREF36 , where sentences are questions in the following 6 classes: abbreviation, human, entity, description, location, numeric. The entire dataset consists of 5,452 training examples and 500 testing examples.",
"For document-level dataset, we use the sentiment classification dataset Yelp 2013 (YELP13) with user and product information, which is built by BIBREF22 . The dataset has been split into 62,522 training, 7,773 validation, and 8,671 testing documents. But in the experiment, we neglect the user and product information to make it consistent with the above experiment settings."
],
[
"We implement our model based on Mxnet BIBREF37 - a C++ library, which is a deep learning framework designed for both efficiency and flexibility. In order to benefit from the efficiency of parallel computation of the tensors, we train our model on a Nvidia GTX 1070 GPU. Training is done through stochastic gradient descent over shuffled mini-batches with the optimizer RMSprop BIBREF38 . For all experiments, we simultaneously apply three asymmetric convolution operation with the second filter length INLINEFORM0 of 2, 3, 4 to the input, set the dropout rate to 0.5 before feeding the feature into BLSTM, and set the initial learning rate to 0.0001. But there are some hyper-parameters that are not the same for all datasets, which are listed in table TABREF14 . We conduct experiments on 3 datasets (MR, SST and SUBJ) to verify the effectiveness our semi-supervised framework. And the setting of INLINEFORM1 and INLINEFORM2 for different datasets are listed in table TABREF15 ."
],
[
"We use the publicly available word2vec vectors that were trained on 100 billion words from Google News. The vectors have dimensionality of 300 and were trained using the continuous bag-of-words architecture BIBREF20 . Words not present in the set of pre-trained words are initialized from the uniform distribution [-0.25, 0.25]. We fix the word vectors and learn only the other parameters of the model during training."
],
[
"We used standard train/test splits for those datasets that had them. Otherwise, we performed 10-fold cross validation. We repeated each experiment 10 times and report the mean accuracy. Results of our models against other methods are listed in table TABREF16 . To the best of our knowledge, AC-BLSTM achieves the best results on five tasks.",
"Compared to methods BIBREF4 and BIBREF5 , which inspired our model mostly, AC-BLSTM can achieve better performance which show that deeper model actually has better performance. By just employing the word2vec vectors, our model can achieve better results than BIBREF30 which combines multiple word embedding methods such as word2vec BIBREF20 , glove BIBREF21 and Syntactic embedding. And the AC-BLSTM performs better when trained with the semi-supervised framework, which proves the success of combining the generative net with AC-BLSTM.",
"The experiment results show that the number of the convolution filter and the lstm memory dimension should keep the same for our model. Also the configuration of hyper-parameters: number of the convolution filter, the lstm memory dimension and the lstm layer are quiet stable across datasets. If the task is simple, e.g. TREC, we just set number of convolution filter to 100, lstm memory dimension to 100 and lstm layer to 1. And as the task becomes complicated, we simply increase the lstm layer from 1 to 4. The SST-2 is a special case, we find that if we set the number of convolution filter and lstm memory dimension to 300 can get better result. And the dropout rate before softmax need to be tuned."
],
[
"In this paper we have proposed AC-BLSTM: a novel framework that combines asymmetric convolutional neural network with bidirectional long short-term memory network. The asymmetric convolutional layers are able to learn phrase-level features. Then output sequences of such higher level representations are fed into the BLSTM to learn long-term dependencies of a given point on both side. To the best of our knowledge, the AC-BLSTM model achieves top performance on standard sentiment classification, question classification and document categorization tasks. And then we proposed a semi-supervised framework for text classification which further improve the performance of AC-BLSTM. In future work, we plan to explore the combination of multiple word embeddings which are described in BIBREF30 .",
"2pt "
]
],
"section_name": [
"Introduction",
"Related Work",
"AC-BLSTM Model",
"Asymmetric Convolution",
"Bidirectional Long Short-Term Memory Network",
"Semi-supervised Framework",
"Regularization",
"Datasets",
"Training and Implementation Details",
"Word Vector Initialization",
"Results and Discussion",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"464b4721baa4c17da1bf47d8d306b99653de6755",
"9130ae59aa0fd46d9ca0a4b7724267cc06a6ee6c"
],
"answer": [
{
"evidence": [
"Our semi-supervised text classification framewrok is inspired by works BIBREF18 , BIBREF19 . We assume the original classifier classify a sample into one of INLINEFORM0 possible classes. So we can do semi-supervised learning by simply adding samples from a generative network G to our dataset and labeling them to an extra class INLINEFORM1 . And correspondingly the dimension of our classifier output increases from INLINEFORM2 to INLINEFORM3 . The configuration of our generator network G is inspired by the architecture proposed in BIBREF16 . And we modify the architecture to make it suitable to the text classification tasks. Table TABREF13 shows the configuration of each layer in the generator G. Lets assume the training batch size is INLINEFORM4 and the percentage of the generated samples among a batch training samples is INLINEFORM5 . At each iteration of the training process, we first generate INLINEFORM6 samples from the generator G then we draw INLINEFORM7 samples from the real dataset. We then perform gradient descent on the AC-BLSTM and generative net G and finally update the parameters of both nets."
],
"extractive_spans": [],
"free_form_answer": "On each step, a generative network is used to generate samples, then a classifier labels them to an extra class. A mix of generated data and real data is combined into a batch, then a gradient descent is performed on the batch, and the parameters are updated.",
"highlighted_evidence": [
"Our semi-supervised text classification framewrok is inspired by works BIBREF18 , BIBREF19 . We assume the original classifier classify a sample into one of INLINEFORM0 possible classes. So we can do semi-supervised learning by simply adding samples from a generative network G to our dataset and labeling them to an extra class INLINEFORM1 .",
"At each iteration of the training process, we first generate INLINEFORM6 samples from the generator G then we draw INLINEFORM7 samples from the real dataset. We then perform gradient descent on the AC-BLSTM and generative net G and finally update the parameters of both nets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Semi-supervised Framework",
"Our semi-supervised text classification framewrok is inspired by works BIBREF18 , BIBREF19 . We assume the original classifier classify a sample into one of INLINEFORM0 possible classes. So we can do semi-supervised learning by simply adding samples from a generative network G to our dataset and labeling them to an extra class INLINEFORM1 . And correspondingly the dimension of our classifier output increases from INLINEFORM2 to INLINEFORM3 . The configuration of our generator network G is inspired by the architecture proposed in BIBREF16 . And we modify the architecture to make it suitable to the text classification tasks. Table TABREF13 shows the configuration of each layer in the generator G. Lets assume the training batch size is INLINEFORM4 and the percentage of the generated samples among a batch training samples is INLINEFORM5 . At each iteration of the training process, we first generate INLINEFORM6 samples from the generator G then we draw INLINEFORM7 samples from the real dataset. We then perform gradient descent on the AC-BLSTM and generative net G and finally update the parameters of both nets."
],
"extractive_spans": [
"At each iteration of the training process, we first generate INLINEFORM6 samples from the generator G then we draw INLINEFORM7 samples from the real dataset.",
"We then perform gradient descent on the AC-BLSTM and generative net G and finally update the parameters of both nets."
],
"free_form_answer": "",
"highlighted_evidence": [
"Semi-supervised Framework\nOur semi-supervised text classification framewrok is inspired by works BIBREF18 , BIBREF19 . We assume the original classifier classify a sample into one of INLINEFORM0 possible classes. So we can do semi-supervised learning by simply adding samples from a generative network G to our dataset and labeling them to an extra class INLINEFORM1 . And correspondingly the dimension of our classifier output increases from INLINEFORM2 to INLINEFORM3 . The configuration of our generator network G is inspired by the architecture proposed in BIBREF16 . And we modify the architecture to make it suitable to the text classification tasks. Table TABREF13 shows the configuration of each layer in the generator G. Lets assume the training batch size is INLINEFORM4 and the percentage of the generated samples among a batch training samples is INLINEFORM5 . At each iteration of the training process, we first generate INLINEFORM6 samples from the generator G then we draw INLINEFORM7 samples from the real dataset. We then perform gradient descent on the AC-BLSTM and generative net G and finally update the parameters of both nets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"cde9288d8223da73c663360c2061e3d9d154cbfe",
"ab551c1b4c8ae83cad3083bfa578e449e413eb24"
],
"answer": [
{
"evidence": [
"We used standard train/test splits for those datasets that had them. Otherwise, we performed 10-fold cross validation. We repeated each experiment 10 times and report the mean accuracy. Results of our models against other methods are listed in table TABREF16 . To the best of our knowledge, AC-BLSTM achieves the best results on five tasks.",
"FLOAT SELECTED: Table 1: Experiment results of our AC-BLSTM model compared with other methods. Performance is measured in accuracy. CNN-non-static, CNN-multichannel: Convolutional Neural Network with fine-tuned word vectors and multi-channels (Kim, 2014). C-LSTM: Combining CNN and LSTM to model sentences (Zhou et al., 2015). Molding-CNN: A feature mapping operation based on tensor products on stacked vectors (Lei et al., 2015). UPNN(no UP): User Product Neural Network without using user and product information (Tang et al., 2015). DSCNN, DSCNN-Pretrain: Dependency Sensitive Convolutional Neural Networks and with pretraind sequence autoencoders (Zhang et al., 2016a). MG-CNN(w2v+Syn+Glv), MGNC-CNN(w2v+Glv), MGNC-CNN(w2v+Syn+Glv): Multi-group norm constraint CNN with w2v:word2vec, Glv:GloVe (Pennington et al., 2014) and Syn: Syntactic embedding (Zhang et al., 2016b). NSC+LA: Neural Sentiment Classification model with local semantic attention (Chen et al., 2016a). SequenceModel(no UP): A sequence modeling-based neural network without using user and product information (Chen et al., 2016b)."
],
"extractive_spans": [],
"free_form_answer": "Model is evaluated on six tasks: TREC, MR, SST-1, SST-2, SUBJ and YELP13.",
"highlighted_evidence": [
"Results of our models against other methods are listed in table TABREF16 .",
"FLOAT SELECTED: Table 1: Experiment results of our AC-BLSTM model compared with other methods. Performance is measured in accuracy. CNN-non-static, CNN-multichannel: Convolutional Neural Network with fine-tuned word vectors and multi-channels (Kim, 2014). C-LSTM: Combining CNN and LSTM to model sentences (Zhou et al., 2015). Molding-CNN: A feature mapping operation based on tensor products on stacked vectors (Lei et al., 2015). UPNN(no UP): User Product Neural Network without using user and product information (Tang et al., 2015). DSCNN, DSCNN-Pretrain: Dependency Sensitive Convolutional Neural Networks and with pretraind sequence autoencoders (Zhang et al., 2016a). MG-CNN(w2v+Syn+Glv), MGNC-CNN(w2v+Glv), MGNC-CNN(w2v+Syn+Glv): Multi-group norm constraint CNN with w2v:word2vec, Glv:GloVe (Pennington et al., 2014) and Syn: Syntactic embedding (Zhang et al., 2016b). NSC+LA: Neural Sentiment Classification model with local semantic attention (Chen et al., 2016a). SequenceModel(no UP): A sequence modeling-based neural network without using user and product information (Chen et al., 2016b)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Defferent hyper-parameters setting across datasets."
],
"extractive_spans": [],
"free_form_answer": "TREC, MR, SST, SUBJ, YELP13",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Defferent hyper-parameters setting across datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"How do they perform semi-supervised learning?",
"What are the five evaluated tasks?"
],
"question_id": [
"05118578b46e9d93052e8a760019ca735d6513ab",
"31b9337fdfbbc33fc456552ad8c355d836d690ff"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
""
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Illustration of the AC-BLSTM architecture. The input is represented as a matrix where each row is a d-dimensional word vector. Then the ACNN is applied to obtain the feature maps, we apply three parallel asymmetric convolution operation on the input in our model, where k1, k2 and k3 stand for the length of the filter. And then the features with the same convolution window index from different convolution layer (different color) are concatenated to generate the input sequence of BLSTM. Finally all the hidden units of BLSTM are concatenated then apply a softmax layer to obtain the prediction output.",
"Figure 2: Illustration of how to deal with the incosistence output sequence length by compression.",
"Table 1: Experiment results of our AC-BLSTM model compared with other methods. Performance is measured in accuracy. CNN-non-static, CNN-multichannel: Convolutional Neural Network with fine-tuned word vectors and multi-channels (Kim, 2014). C-LSTM: Combining CNN and LSTM to model sentences (Zhou et al., 2015). Molding-CNN: A feature mapping operation based on tensor products on stacked vectors (Lei et al., 2015). UPNN(no UP): User Product Neural Network without using user and product information (Tang et al., 2015). DSCNN, DSCNN-Pretrain: Dependency Sensitive Convolutional Neural Networks and with pretraind sequence autoencoders (Zhang et al., 2016a). MG-CNN(w2v+Syn+Glv), MGNC-CNN(w2v+Glv), MGNC-CNN(w2v+Syn+Glv): Multi-group norm constraint CNN with w2v:word2vec, Glv:GloVe (Pennington et al., 2014) and Syn: Syntactic embedding (Zhang et al., 2016b). NSC+LA: Neural Sentiment Classification model with local semantic attention (Chen et al., 2016a). SequenceModel(no UP): A sequence modeling-based neural network without using user and product information (Chen et al., 2016b).",
"Table 2: Defferent hyper-parameters setting across datasets."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png"
]
} | [
"How do they perform semi-supervised learning?",
"What are the five evaluated tasks?"
] | [
[
"1611.01884-Semi-supervised Framework-0"
],
[
"1611.01884-5-Table1-1.png",
"1611.01884-Results and Discussion-0",
"1611.01884-5-Table2-1.png"
]
] | [
"On each step, a generative network is used to generate samples, then a classifier labels them to an extra class. A mix of generated data and real data is combined into a batch, then a gradient descent is performed on the batch, and the parameters are updated.",
"TREC, MR, SST, SUBJ, YELP13"
] | 327 |
1804.09692 | Factors Influencing the Surprising Instability of Word Embeddings | Despite the recent popularity of word embedding methods, there is only a small body of work exploring the limitations of these representations. In this paper, we consider one aspect of embedding spaces, namely their stability. We show that even relatively high frequency words (100-200 occurrences) are often unstable. We provide empirical evidence for how various factors contribute to the stability of word embeddings, and we analyze the effects of stability on downstream tasks. | {
"paragraphs": [
[
"Word embeddings are low-dimensional, dense vector representations that capture semantic properties of words. Recently, they have gained tremendous popularity in Natural Language Processing (NLP) and have been used in tasks as diverse as text similarity BIBREF0 , part-of-speech tagging BIBREF1 , sentiment analysis BIBREF2 , and machine translation BIBREF3 . Although word embeddings are widely used across NLP, their stability has not yet been fully evaluated and understood. In this paper, we explore the factors that play a role in the stability of word embeddings, including properties of the data, properties of the algorithm, and properties of the words. We find that word embeddings exhibit substantial instabilities, which can have implications for downstream tasks.",
"Using the overlap between nearest neighbors in an embedding space as a measure of stability (see sec:definingStability below for more information), we observe that many common embedding spaces have large amounts of instability. For example, Figure FIGREF1 shows the instability of the embeddings obtained by training word2vec on the Penn Treebank (PTB) BIBREF4 . As expected, lower frequency words have lower stability and higher frequency words have higher stability. What is surprising however about this graph is the medium-frequency words, which show huge variance in stability. This cannot be explained by frequency, so there must be other factors contributing to their instability.",
"In the following experiments, we explore which factors affect stability, as well as how this stability affects downstream tasks that word embeddings are commonly used for. To our knowledge, this is the first study comprehensively examining the factors behind instability."
],
[
"There has been much recent interest in the applications of word embeddings, as well as a small, but growing, amount of work analyzing the properties of word embeddings.",
"Here, we explore three different embedding methods: PPMI BIBREF6 , word2vec BIBREF7 , and GloVe BIBREF8 . Various aspects of the embedding spaces produced by these algorithms have been previously studied. Particularly, the effect of parameter choices has a large impact on how all three of these algorithms behave BIBREF9 . Further work shows that the parameters of the embedding algorithm word2vec influence the geometry of word vectors and their context vectors BIBREF10 . These parameters can be optimized; Hellrich and Hahn ( BIBREF11 ) posit optimal parameters for negative sampling and the number of epochs to train for. They also demonstrate that in addition to parameter settings, word properties, such as word ambiguity, affect embedding quality.",
"In addition to exploring word and algorithmic parameters, concurrent work by Antoniak and Mimno ( BIBREF12 ) evaluates how document properties affect the stability of word embeddings. We also explore the stability of embeddings, but focus on a broader range of factors, and consider the effect of stability on downstream tasks. In contrast, Antoniak and Mimno focus on using word embeddings to analyze language BIBREF13 , rather than to perform tasks.",
"At a higher level of granularity, Tan et al. ( BIBREF14 ) analyze word embedding spaces by comparing two spaces. They do this by linearly transforming one space into another space, and they show that words have different usage properties in different domains (in their case, Twitter and Wikipedia).",
"Finally, embeddings can be analyzed using second-order properties of embeddings (e.g., how a word relates to the words around it). Newman-Griffis and Fosler-Lussier ( BIBREF15 ) validate the usefulness of second-order properties, by demonstrating that embeddings based on second-order properties perform as well as the typical first-order embeddings. Here, we use second-order properties of embeddings to quantify stability."
],
[
"We define stability as the percent overlap between nearest neighbors in an embedding space. Given a word INLINEFORM0 and two embedding spaces INLINEFORM1 and INLINEFORM2 , take the ten nearest neighbors of INLINEFORM3 in both INLINEFORM4 and INLINEFORM5 . Let the stability of INLINEFORM6 be the percent overlap between these two lists of nearest neighbors. 100% stability indicates perfect agreement between the two embedding spaces, while 0% stability indicates complete disagreement. In order to find the ten nearest neighbors of a word INLINEFORM7 in an embedding space INLINEFORM8 , we measure distance between words using cosine similarity. This definition of stability can be generalized to more than two embedding spaces by considering the average overlap between two sets of embedding spaces. Let INLINEFORM12 and INLINEFORM13 be two sets of embedding spaces. Then, for every pair of embedding spaces INLINEFORM14 , where INLINEFORM15 and INLINEFORM16 , take the ten nearest neighbors of INLINEFORM17 in both INLINEFORM18 and INLINEFORM19 and calculate percent overlap. Let the stability be the average percent overlap over every pair of embedding spaces INLINEFORM20 .",
"Consider an example using this metric. Table TABREF4 shows the top ten nearest neighbors for the word international in three randomly initialized word2vec embedding spaces trained on the NYT Arts domain (see Section SECREF11 for a description of this corpus). These models share some similar words, such as metropolitan and national, but there are also many differences. On average, each pair of models has four out of ten words in common, so the stability of international across these three models is 40%.",
"The idea of evaluating ten best options is also found in other tasks, like lexical substitution BIBREF16 and word association BIBREF17 , where the top ten results are considered in the final evaluation metric. To give some intuition for how changing the number of nearest neighbors affects our stability metric, consider Figure FIGREF5 . This graph shows how the stability of GloVe changes with the frequency of the word and the number of neighbors used to calculate stability; please see the figure caption for a more detailed explanation of how this graph is structured. Within each frequency bucket, the stability is consistent across varying numbers of neighbors. Ten nearest neighbors performs approximately as well as a higher number of nearest neighbors (e.g., 100). We see this pattern for low frequency words as well as for high frequency words. Because the performance does not change substantially by increasing the number of nearest neighbors, it is computationally less intensive to use a small number of nearest neighbors. We choose ten nearest neighbors as our metric throughout the rest of the paper."
],
[
"As we saw in Figure FIGREF1 , embeddings are sometimes surprisingly unstable. To understand the factors behind the (in)stability of word embeddings, we build a regression model that aims to predict the stability of a word given: (1) properties related to the word itself; (2) properties of the data used to train the embeddings; and (3) properties of the algorithm used to construct these embeddings. Using this regression model, we draw observations about factors that play a role in the stability of word embeddings."
],
[
"We use ridge regression to model these various factors BIBREF18 . Ridge regression regularizes the magnitude of the model weights, producing a more interpretable model than non-regularized linear regression. This regularization mitigates the effects of multicollinearity (when two features are highly correlated). Specifically, given INLINEFORM0 ground-truth data points with INLINEFORM1 extracted features per data point, let INLINEFORM2 be the features for sample INLINEFORM3 and let INLINEFORM4 be the set of labels. Then, ridge regression learns a set of weights INLINEFORM5 by minimizing the least squares function with INLINEFORM6 regularization, where INLINEFORM7 is a regularization constant: INLINEFORM8 ",
"We set INLINEFORM0 . In addition to ridge regression, we tried non-regularized linear regression. We obtained comparable results, but many of the weights were very large or very small, making them hard to interpret.",
"The goodness of fit of a regression model is measured using the coefficient of determination INLINEFORM0 . This measures how much variance in the dependent variable INLINEFORM1 is captured by the independent variables INLINEFORM2 . A model that always predicts the expected value of INLINEFORM3 , regardless of the input features, will receive an INLINEFORM4 score of 0. The highest possible INLINEFORM5 score is 1, and the INLINEFORM6 score can be negative.",
"Given this model, we create training instances by observing the stability of a large number of words across various combinations of two embedding spaces. Specifically, given a word INLINEFORM0 and two embedding spaces INLINEFORM1 and INLINEFORM2 , we encode properties of the word INLINEFORM3 , as well as properties of the datasets and the algorithms used to train the embedding spaces INLINEFORM4 and INLINEFORM5 . The target value associated with this features is the stability of the word INLINEFORM6 across embedding spaces INLINEFORM7 and INLINEFORM8 . We repeat this process for more than 2,500 words, several datasets, and three embedding algorithms.",
"Specifically, we consider all the words present in all seven of the data domains we are using (see Section SECREF11 ), 2,521 words in total. Using the feature categories described below, we generate a feature vector for each unique word, dataset, algorithm, and dimension size, resulting in a total of 27,794,025 training instances. To get good average estimates for each embedding algorithm, we train each embedding space five times, randomized differently each time (this does not apply to PPMI, which has no random component). We then train a ridge regression model on these instances. The model is trained to predict the stability of word INLINEFORM0 across embedding spaces INLINEFORM1 and INLINEFORM2 (where INLINEFORM3 and INLINEFORM4 are not necessarily trained using the same algorithm, parameters, or training data). Because we are using this model to learn associations between certain features and stability, no test data is necessary. The emphasis is on the model itself, not on the model's performance on a specific task.",
"We describe next each of the three main categories of factors examined in the model. An example of these features is given in Table TABREF7 ."
],
[
"We encode several features that capture attributes of the word INLINEFORM0 . First, we use the primary and secondary part-of-speech (POS) of the word. Both of these are represented as bags-of-words of all possible POS, and are determined by looking at the primary (most frequent) and secondary (second most frequent) POS of the word in the Brown corpus BIBREF20 . If the word is not present in the Brown corpus, then all of these POS features are set to zero.",
"To get a coarse-grained representation of the polysemy of the word, we consider the number of different POS present. For a finer-grained representation, we use the number of different WordNet senses associated with the word BIBREF21 , BIBREF22 .",
"We also consider the number of syllables in a word, determined using the CMU Pronuncing Dictionary BIBREF23 . If the word is not present in the dictionary, then this is set to zero."
],
[
"Data features capture properties of the training data (and the word in relation to the training data). For this model, we gather data from two sources: New York Times (NYT) BIBREF24 and Europarl BIBREF25 . Overall, we consider seven domains of data: (1) NYT - U.S., (2) NYT - New York and Region, (3) NYT - Business, (4) NYT - Arts, (5) NYT - Sports, (6) All of the data from domains 1-5 (denoted “All NYT\"), and (7) All of English Europarl. Table TABREF10 shows statistics about these datasets. The first five domains are chosen because they are the top five most common categories of news articles present in the NYT corpus. They are smaller than “All NYT\" and Europarl, and they have a narrow topical focus. The “All NYT\" domain is more diverse topically and larger than the first five domains. Finally, the Europarl domain is the largest domain, and it is focused on a single topic (European Parliamentary politics). These varying datasets allow us to consider how data-dependent properties affect stability.",
"We use several features related to domain. First, we consider the raw frequency of word INLINEFORM0 in both the domain of data used for embedding space INLINEFORM1 and the domain of data for space INLINEFORM2 . To make our regression model symmetric, we effectively encode three features: the higher raw frequency (between the two), the lower raw frequency, and the absolute difference in raw frequency.",
"We also consider the vocabulary size of each corpus (again, symmetrically) and the percent overlap between corpora vocabulary, as well as the domain of each of the two corpora, represented as a bag-of-words of domains. Finally, we consider whether the two corpora are from the same domain.",
"Our final data-level features explore the role of curriculum learning in stability. It has been posited that the order of the training data affects the performance of certain algorithms, and previous work has shown that for some neural network-based tasks, a good training data order (curriculum learning strategy) can improve performance BIBREF26 . Curriculum learning has been previously explored for word2vec, where it has been found that optimizing training data order can lead to small improvements on common NLP tasks BIBREF1 . Of the embedding algorithms we consider, curriculum learning only affects word2vec. Because GloVe and PPMI use the data to learn a complete matrix before building embeddings, the order of the training data will not affect their performance. To measure the effects of training data order, we include as features the first appearance of word INLINEFORM0 in the dataset for embedding space INLINEFORM1 and the first appearance of INLINEFORM2 in the dataset for embedding space INLINEFORM3 (represented as percentages of the total number of training sentences). We further include the absolute difference between these percentages."
],
[
"In addition to word and data properties, we encode features about the embedding algorithms. These include the different algorithms being used, as well as the different parameter settings of these algorithms. Here, we consider three embedding algorithms, word2vec, GloVe, and PPMI. The choice of algorithm is represented in our feature vector as a bag-of-words.",
"PPMI creates embeddings by first building a positive pointwise mutual information word-context matrix, and then reducing the dimensionality of this matrix using SVD BIBREF6 . A more recent word embedding algorithm, word2vec (skip-gram model) BIBREF7 uses a shallow neural network to learn word embeddings by predicting context words. Another recent method for creating word embeddings, GloVe, is based on factoring a matrix of ratios of co-occurrence probabilities BIBREF8 .",
"For each algorithm, we choose common parameter settings. For word2vec, two of the parameters that need to be chosen are window size and minimum count. Window size refers to the maximum distance between the current word and the predicted word (e.g., how many neighboring words to consider for each target word). Any word appearing less than the minimum count number of times in the corpus is discarded and not considered in the word2vec algorithm. For both of these features, we choose standard parameter settings, namely, a window size of 5 and a minimum count of 5. For GloVe, we also choose standard parameters. We use 50 iterations of the algorithm for embedding dimensions less than 300, and 100 iterations for higher dimensions.",
"We also add a feature reflecting the embedding dimension, namely one of five embedding dimensions: 50, 100, 200, 400, or 800."
],
[
"Overall, the regression model achieves a coefficient of determination ( INLINEFORM0 ) score of 0.301 on the training data, which indicates that the regression has learned a linear model that reasonably fits the training data given. Using the regression model, we can analyze the weights corresponding to each of the features being considered, shown in Table TABREF14 . These weights are difficult to interpret, because features have different distributions and ranges. However, we make several general observations about the stability of word embeddings.",
"",
"Observation 1. Curriculum learning is important. This is evident because the top two features (by magnitude) of the regression model capture where the word first appears in the training data. Figure FIGREF15 shows trends between training data position and stability in the PTB. This figure contrasts word2vec with GloVe (which is order invariant).",
"To further understand the effect of curriculum learning on the model, we train a regression model with all of the features except the curriculum learning features. This model achieves an INLINEFORM0 score of 0.291 (compared to the full model's score of 0.301). This indicates that curriculum learning is a factor in stability.",
"",
"Observation 2. POS is one of the biggest factors in stability. Table TABREF14 shows that many of the top weights belong to POS-related features (both primary and secondary POS). Table TABREF18 compares average stabilities for each primary POS. Here we see that the most stable POS are numerals, verbs, and determiners, while the least stable POS are punctuation marks, adpositions, and particles.",
"",
"Observation 3. Stability within domains is greater than stability across domains. Table TABREF14 shows that many of the top factors are domain-related. Figure FIGREF19 shows the results of the regression model broken down by domain. This figure shows the highest stabilities appearing on the diagonal of the matrix, where the two embedding spaces both belong to the same domain. The stabilities are substantially lower off the diagonal.",
"Figure FIGREF19 also shows that “All NYT\" generalizes across the other NYT domains better than Europarl, but not as well as in-domain data (“All NYT\" encompasses data from US, NY, Business, Arts, and Sports). This is true even though Europarl is much larger than “All NYT\".",
"",
"Observation 4. Overall, GloVe is the most stable embedding algorithm. This is particularly apparent when only in-domain data is considered, as in Figure FIGREF19 . PPMI achieves similar stability, while word2vec lags considerably behind.",
"To further compare word2vec and GloVe, we look at how the stability of word2vec changes with the frequency of the word and the number of neighbors used to calculate stability. This is shown in Figure FIGREF20 and is directly comparable to Figure FIGREF5 . Surprisingly, the stability of word2vec varies substantially with the frequency of the word. For lower-frequency words, as the number of nearest neighbors increases, the stability increases approximately exponentially. For high-frequency words, the lowest and highest number of nearest neighbors show the greatest stability. This is different than GloVe, where stability remains reasonably constant across word frequencies, as shown in Figure FIGREF5 . The behavior we see here agrees with the conclusion of BIBREF10 , who find that GloVe exhibits more well-behaved geometry than word2vec.",
"",
"Observation 5. Frequency is not a major factor in stability. To better understand the role that frequency plays in stability, we run separate ablation experiments comparing regression models with frequency features to regression models without frequency features. Our current model (using raw frequency) achieves an INLINEFORM0 score of 0.301. Comparably, a model using the same features, but with normalized instead of raw frequency, achieves a score of 0.303. Removing frequency from either regression model gives a score of 0.301. This indicates that frequency is not a major factor in stability, though normalized frequency is a larger factor than raw frequency.",
"Finally, we look at regression models using only frequency features. A model using only raw frequency features has an INLINEFORM0 score of 0.008, while a model with only normalized frequency features has an INLINEFORM1 score of 0.0059. This indicates that while frequency is not a major factor in stability, it is also not negligible. As we pointed out in the introduction, frequency does correlate with stability (Figure FIGREF1 ). However, in the presence of all of these other features, frequency becomes a minor factor."
],
[
"Word embeddings are used extensively as the first stage of neural networks throughout NLP. Typically, embeddings are initalized based on a vector trained with word2vec or GloVe and then further modified as part of training for the target task. We study two downstream tasks to see whether stability impacts performance.",
"Since we are interested in seeing the impact of word vector stability, we choose tasks that have an intuitive evaluation at the word level: word similarity and POS tagging."
],
[
"To model word similarity, we use 300-dimensional word2vec embedding spaces trained on the PTB. For each pair of words, we take the cosine similarity between those words averaged over ten randomly initialized embedding spaces. We consider three datasets for evaluating word similarity: WS353 (353 pairs) BIBREF27 , MTurk287 (287 pairs) BIBREF28 , and MTurk771 (771 pairs) BIBREF29 . For each dataset, we normalize the similarity to be in the range INLINEFORM0 , and we take the absolute difference between our predicted value and the ground-truth value. Figure FIGREF22 shows the results broken down by stability of the two words (we always consider Word 1 to be the more stable word in the pair). Word similarity pairs where one of the words is not present in the PTB are omitted.",
"We find that these word similarity datasets do not contain a balanced distribution of words with respect to stability; there are substantially more unstable words than there are stable words. However, we still see a slight trend: As the combined stability of the two words increases, the average absolute error decreases, as reflected by the lighter color of the cells in Figure FIGREF22 while moving away from the (0,0) data point."
],
[
"Part-of-speech (POS) tagging is a substantially more complicated task than word similarity. We use a bidirectional LSTM implemented using DyNet BIBREF30 . We train nine sets of 128-dimensional word embeddings with word2vec using different random seeds. The LSTM has a single layer and 50-dimensional hidden vectors. Outputs are passed through a tanh layer before classification. To train, we use SGD with a learning rate of 0.1, an input noise rate of 0.1, and recurrent dropout of 0.4.",
"This simple model is not state-of-the-art, scoring 95.5% on the development set, but the word vectors are a central part of the model, providing a clear signal of their impact. For each word, we group tokens based on stability and frequency. Figure FIGREF24 shows the results. Fixing the word vectors provides a clearer pattern in the results, but also leads to much worse performance: 85.0% on the development set. Based on these results, it seems that training appears to compensate for stability. This hypothesis is supported by Figure FIGREF24 , which shows the similarity between the original word vectors and the shifted word vectors produced by the training. In general, lower stability words are shifted more during training.",
"Understanding how the LSTM is changing the input embeddings is useful information for tasks with limited data, and it could allow us to improve embeddings and LSTM training for these low-resource tasks."
],
[
"Word embeddings are surprisingly variable, even for relatively high frequency words. Using a regression model, we show that domain and part-of-speech are key factors of instability. Downstream experiments show that stability impacts tasks using embedding-based features, though allowing embeddings to shift during training can reduce this effect. In order to use the most stable embedding spaces for future tasks, we recommend either using GloVe or learning a good curriculum for word2vec training data. We also recommend using in-domain embeddings whenever possible.",
"The code used in the experiments described in this paper is publicly available from http://lit.eecs.umich.edu/downloads.html."
],
[
"We would like to thank Ben King and David Jurgens for helpful discussions about this paper, as well as our anonymous reviewers for useful feedback. This material is based in part upon work supported by the National Science Foundation (NSF #1344257) and the Michigan Institute for Data Science (MIDAS). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF or MIDAS."
]
],
"section_name": [
"Introduction",
"Related Work",
"Defining Stability",
"Factors Influencing Stability",
"Methodology",
"Word Properties",
"Data Properties",
"Algorithm Properties",
"Lessons Learned: What Contributes to the Stability of an Embedding",
"Impact of Stability on Downstream Tasks",
"Word Similarity",
"Part-of-Speech Tagging",
"Conclusion and Recommendations",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"2fb49de085e936f13a771b68d6724ad0c9f7b73c",
"890fee7cb82194844d9ce6026b4c060deee5e108"
],
"answer": [
{
"evidence": [
"Word embeddings are used extensively as the first stage of neural networks throughout NLP. Typically, embeddings are initalized based on a vector trained with word2vec or GloVe and then further modified as part of training for the target task. We study two downstream tasks to see whether stability impacts performance.",
"Since we are interested in seeing the impact of word vector stability, we choose tasks that have an intuitive evaluation at the word level: word similarity and POS tagging."
],
"extractive_spans": [
"word similarity",
"POS tagging"
],
"free_form_answer": "",
"highlighted_evidence": [
"We study two downstream tasks to see whether stability impacts performance.\n\nSince we are interested in seeing the impact of word vector stability, we choose tasks that have an intuitive evaluation at the word level: word similarity and POS tagging."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Since we are interested in seeing the impact of word vector stability, we choose tasks that have an intuitive evaluation at the word level: word similarity and POS tagging."
],
"extractive_spans": [
"word similarity",
"POS tagging"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since we are interested in seeing the impact of word vector stability, we choose tasks that have an intuitive evaluation at the word level: word similarity and POS tagging."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"398b058b46b748b95044049b4a13341d26b95a96",
"c587f50489eda53892dddf2a84530d29af95bd10"
],
"answer": [
{
"evidence": [
"To further understand the effect of curriculum learning on the model, we train a regression model with all of the features except the curriculum learning features. This model achieves an INLINEFORM0 score of 0.291 (compared to the full model's score of 0.301). This indicates that curriculum learning is a factor in stability.",
"Observation 2. POS is one of the biggest factors in stability. Table TABREF14 shows that many of the top weights belong to POS-related features (both primary and secondary POS). Table TABREF18 compares average stabilities for each primary POS. Here we see that the most stable POS are numerals, verbs, and determiners, while the least stable POS are punctuation marks, adpositions, and particles.",
"Observation 3. Stability within domains is greater than stability across domains. Table TABREF14 shows that many of the top factors are domain-related. Figure FIGREF19 shows the results of the regression model broken down by domain. This figure shows the highest stabilities appearing on the diagonal of the matrix, where the two embedding spaces both belong to the same domain. The stabilities are substantially lower off the diagonal."
],
"extractive_spans": [
"curriculum learning",
"POS",
"domains."
],
"free_form_answer": "",
"highlighted_evidence": [
"This indicates that curriculum learning is a factor in stability.",
"POS is one of the biggest factors in stability.",
"Stability within domains is greater than stability across domains. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Overall, the regression model achieves a coefficient of determination ( INLINEFORM0 ) score of 0.301 on the training data, which indicates that the regression has learned a linear model that reasonably fits the training data given. Using the regression model, we can analyze the weights corresponding to each of the features being considered, shown in Table TABREF14 . These weights are difficult to interpret, because features have different distributions and ranges. However, we make several general observations about the stability of word embeddings.",
"Observation 1. Curriculum learning is important. This is evident because the top two features (by magnitude) of the regression model capture where the word first appears in the training data. Figure FIGREF15 shows trends between training data position and stability in the PTB. This figure contrasts word2vec with GloVe (which is order invariant).",
"Observation 2. POS is one of the biggest factors in stability. Table TABREF14 shows that many of the top weights belong to POS-related features (both primary and secondary POS). Table TABREF18 compares average stabilities for each primary POS. Here we see that the most stable POS are numerals, verbs, and determiners, while the least stable POS are punctuation marks, adpositions, and particles."
],
"extractive_spans": [
"POS is one of the biggest factors in stability"
],
"free_form_answer": "",
"highlighted_evidence": [
"However, we make several general observations about the stability of word embeddings.\n\nObservation 1. Curriculum learning is important.",
"Observation 2. POS is one of the biggest factors in stability."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"13745385c36b1941cecdd2ea84337fb9be926a8f",
"efdd60fdecb6bc39f223feb2acd0ae47923c89ed"
],
"answer": [
{
"evidence": [
"Defining Stability",
"We define stability as the percent overlap between nearest neighbors in an embedding space. Given a word INLINEFORM0 and two embedding spaces INLINEFORM1 and INLINEFORM2 , take the ten nearest neighbors of INLINEFORM3 in both INLINEFORM4 and INLINEFORM5 . Let the stability of INLINEFORM6 be the percent overlap between these two lists of nearest neighbors. 100% stability indicates perfect agreement between the two embedding spaces, while 0% stability indicates complete disagreement. In order to find the ten nearest neighbors of a word INLINEFORM7 in an embedding space INLINEFORM8 , we measure distance between words using cosine similarity. This definition of stability can be generalized to more than two embedding spaces by considering the average overlap between two sets of embedding spaces. Let INLINEFORM12 and INLINEFORM13 be two sets of embedding spaces. Then, for every pair of embedding spaces INLINEFORM14 , where INLINEFORM15 and INLINEFORM16 , take the ten nearest neighbors of INLINEFORM17 in both INLINEFORM18 and INLINEFORM19 and calculate percent overlap. Let the stability be the average percent overlap over every pair of embedding spaces INLINEFORM20 ."
],
"extractive_spans": [
"We define stability as the percent overlap between nearest neighbors in an embedding space.",
"0% stability indicates complete disagreement"
],
"free_form_answer": "",
"highlighted_evidence": [
"Defining Stability\nWe define stability as the percent overlap between nearest neighbors in an embedding space. Given a word INLINEFORM0 and two embedding spaces INLINEFORM1 and INLINEFORM2 , take the ten nearest neighbors of INLINEFORM3 in both INLINEFORM4 and INLINEFORM5 . Let the stability of INLINEFORM6 be the percent overlap between these two lists of nearest neighbors. 100% stability indicates perfect agreement between the two embedding spaces, while 0% stability indicates complete disagreement."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Figure 2: Stability of GloVe on the PTB. Stability is measured across ten randomized embedding spaces trained on the training data of the PTB (determined using language modeling splits (Mikolov et al., 2010)). Each word is placed in a frequency bucket (left y-axis) and stability is determined using a varying number of nearest neighbors for each frequency bucket (right yaxis). Each row is normalized, and boxes with more than 0.01 of the row’s mass are outlined."
],
"extractive_spans": [],
"free_form_answer": "An embedding is unstable if it has a low number of nearest neighbor embeddings of the words within the same frequency bucket.",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 2: Stability of GloVe on the PTB. Stability is measured across ten randomized embedding spaces trained on the training data of the PTB (determined using language modeling splits (Mikolov et al., 2010)). Each word is placed in a frequency bucket (left y-axis) and stability is determined using a varying number of nearest neighbors for each frequency bucket (right yaxis). Each row is normalized, and boxes with more than 0.01 of the row’s mass are outlined."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"2f07d6bb474ea09f5c6cb8141616e8552d4e1ea6",
"45547c161c04f097b0ccb13c015a3ed6eaa2764b"
],
"answer": [
{
"evidence": [
"In addition to word and data properties, we encode features about the embedding algorithms. These include the different algorithms being used, as well as the different parameter settings of these algorithms. Here, we consider three embedding algorithms, word2vec, GloVe, and PPMI. The choice of algorithm is represented in our feature vector as a bag-of-words."
],
"extractive_spans": [
" word2vec, GloVe, and PPMI"
],
"free_form_answer": "",
"highlighted_evidence": [
"Here, we consider three embedding algorithms, word2vec, GloVe, and PPMI."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In addition to word and data properties, we encode features about the embedding algorithms. These include the different algorithms being used, as well as the different parameter settings of these algorithms. Here, we consider three embedding algorithms, word2vec, GloVe, and PPMI. The choice of algorithm is represented in our feature vector as a bag-of-words."
],
"extractive_spans": [
"word2vec",
"GloVe",
"PPMI"
],
"free_form_answer": "",
"highlighted_evidence": [
"Here, we consider three embedding algorithms, word2vec, GloVe, and PPMI. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What downstream tasks are explored?",
"What factors contribute to the stability of the word embeddings?",
"How is unstability defined?",
"What embedding algorithms are explored?"
],
"question_id": [
"389ff1927ba9fc8bac50959fc09f30c2143cc14e",
"b968bd264995cd03d7aaad1baba1838c585ec909",
"afcd1806b931a97c0679f873a71b825e668f2b75",
"01c8c3836467a4399cc37e86244b5bdc5dda2401"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Stability of word2vec as a property of frequency in the PTB. Stability is measured across ten randomized embedding spaces trained on the training portion of the PTB (determined using language modeling splits (Mikolov et al., 2010)). Each word is placed in a frequency bucket (x-axis), and each column (frequency bucket) is normalized.",
"Table 1: Top ten most similar words for the word international in three randomly intialized word2vec models trained on the NYT Arts Domain. Words in all three lists are in bold; words in only two of the lists are italicized.",
"Figure 2: Stability of GloVe on the PTB. Stability is measured across ten randomized embedding spaces trained on the training data of the PTB (determined using language modeling splits (Mikolov et al., 2010)). Each word is placed in a frequency bucket (left y-axis) and stability is determined using a varying number of nearest neighbors for each frequency bucket (right yaxis). Each row is normalized, and boxes with more than 0.01 of the row’s mass are outlined.",
"Table 2: Consider the word international in two embedding spaces. Suppose embedding spaceA is trained using word2vec (embedding dimension 100) on the NYT Arts domain, and embedding space B is trained using PPMI (embedding dimension 100) on Europarl. This table summarizes the resulting features for this word across the two embedding spaces.",
"Table 3: Dataset statistics.",
"Table 4: Regression weights with a magnitude greater than 0.1, sorted by magnitude.",
"Figure 3: Stability of both word2vec and GloVe as properties of the starting word position in the training data of the PTB. Stability is measured across ten randomized embedding spaces trained on the training data of the PTB (determined using language modeling splits (Mikolov et al., 2010)). Boxes with more than 0.02% of the total vocabulary mass are outlined.",
"Table 5: Percent stability broken down by part-ofspeech, ordered by decreasing stability.",
"Figure 4: Percent stability broken down by domain.",
"Figure 5: Percent stability broken down between algorithm (in-domain data only).",
"Figure 7: Absolute error for word similarity.",
"Figure 6: Stability of word2vec on the PTB. Stability is measured across ten randomized embedding spaces trained on the training data of the PTB (determined using language modeling splits (Mikolov et al., 2010)). Each word is placed in a frequency bucket (left y-axis) and stability is determined using a varying number of nearest neighbors for each frequency bucket (right yaxis). Each row is normalized, and boxes with more than 0.01 of the row’s mass are outlined.",
"Figure 8: Results for POS tagging. (a) and (b) show average POS tagging error divided by the number of tokens (darker is more errors) while either keeping word vectors fixed or not during training. (c) shows word vector shift, measured as cosine similarity between initial and final vectors. In all graphs, words are bucketed by frequency and stability."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"3-Figure2-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"6-Figure3-1.png",
"7-Table5-1.png",
"7-Figure4-1.png",
"7-Figure5-1.png",
"8-Figure7-1.png",
"8-Figure6-1.png",
"9-Figure8-1.png"
]
} | [
"How is unstability defined?"
] | [
[
"1804.09692-3-Figure2-1.png",
"1804.09692-Defining Stability-0"
]
] | [
"An embedding is unstable if it has a low number of nearest neighbor embeddings of the words within the same frequency bucket."
] | 328 |
1606.02892 | Linguistic Input Features Improve Neural Machine Translation | Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder--decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-of-speech tags, and syntactic dependency labels as input features to English<->German, and English->Romanian neural machine translation systems. In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An open-source implementation of our neural MT system is available, as are sample files and configurations. | {
"paragraphs": [
[
"Neural machine translation has recently achieved impressive results BIBREF0 , BIBREF1 , while learning from raw, sentence-aligned parallel text and using little in the way of external linguistic information. However, we hypothesize that various levels of linguistic annotation can be valuable for neural machine translation. Lemmatisation can reduce data sparseness, and allow inflectional variants of the same word to explicitly share a representation in the model. Other types of annotation, such as parts-of-speech (POS) or syntactic dependency labels, can help in disambiguation. In this paper we investigate whether linguistic information is beneficial to neural translation models, or whether their strong learning capability makes explicit linguistic features redundant.",
"Let us motivate the use of linguistic features using examples of actual translation errors by neural MT systems. In translation out of English, one problem is that the same surface word form may be shared between several word types, due to homonymy or word formation processes such as conversion. For instance, close can be a verb, adjective, or noun, and these different meanings often have distinct translations into other languages. Consider the following English INLINEFORM0 German example:",
"For the English source sentence in Example SECREF4 (our translation in Example SECREF5 ), a neural MT system (our baseline system from Section SECREF4 ) mistranslates close as a verb, and produces the German verb schließen (Example SECREF6 ), even though close is an adjective in this sentence, which has the German translation nah. Intuitively, part-of-speech annotation of the English input could disambiguate between verb, noun, and adjective meanings of close.",
"As a second example, consider the following German INLINEFORM0 English example:",
"German main clauses have a verb-second (V2) word order, whereas English word order is generally SVO. The German sentence (Example UID7 ; English reference in Example UID8 ) topicalizes the predicate gefährlich 'dangerous', putting the subject die Route 'the route' after the verb. Our baseline system (Example UID9 ) retains the original word order, which is highly unusual in English, especially for prose in the news domain. A syntactic annotation of the source sentence could support the attentional encoder-decoder in learning which words in the German source to attend (and translate) first.",
"We will investigate the usefulness of linguistic features for the language pair German INLINEFORM0 English, considering the following linguistic features:",
"The inclusion of lemmas is motivated by the hope for a better generalization over inflectional variants of the same word form. The other linguistic features are motivated by disambiguation, as discussed in our introductory examples."
],
[
"We follow the neural machine translation architecture by DBLP:journals/corr/BahdanauCB14, which we will briefly summarize here.",
"The neural machine translation system is implemented as an attentional encoder-decoder network with recurrent neural networks.",
"The encoder is a bidirectional neural network with gated recurrent units BIBREF3 that reads an input sequence INLINEFORM0 and calculates a forward sequence of hidden states INLINEFORM1 , and a backward sequence INLINEFORM2 . The hidden states INLINEFORM3 and INLINEFORM4 are concatenated to obtain the annotation vector INLINEFORM5 .",
"The decoder is a recurrent neural network that predicts a target sequence INLINEFORM0 . Each word INLINEFORM1 is predicted based on a recurrent hidden state INLINEFORM2 , the previously predicted word INLINEFORM3 , and a context vector INLINEFORM4 . INLINEFORM5 is computed as a weighted sum of the annotations INLINEFORM6 . The weight of each annotation INLINEFORM7 is computed through an alignment model INLINEFORM8 , which models the probability that INLINEFORM9 is aligned to INLINEFORM10 . The alignment model is a single-layer feedforward neural network that is learned jointly with the rest of the network through backpropagation.",
"A detailed description can be found in BIBREF0 , although our implementation is based on a slightly modified form of this architecture, released for the dl4mt tutorial. Training is performed on a parallel corpus with stochastic gradient descent. For translation, a beam search with small beam size is employed."
],
[
"Our main innovation over the standard encoder-decoder architecture is that we represent the encoder input as a combination of features BIBREF4 .",
"We here show the equation for the forward states of the encoder (for the simple RNN case; consider BIBREF0 for GRU): DISPLAYFORM0 ",
"where INLINEFORM0 is a word embedding matrix, INLINEFORM1 , INLINEFORM2 are weight matrices, with INLINEFORM3 and INLINEFORM4 being the word embedding size and number of hidden units, respectively, and INLINEFORM5 being the vocabulary size of the source language.",
"We generalize this to an arbitrary number of features INLINEFORM0 : DISPLAYFORM0 ",
"where INLINEFORM0 is the vector concatenation, INLINEFORM1 are the feature embedding matrices, with INLINEFORM2 , and INLINEFORM3 is the vocabulary size of the INLINEFORM4 th feature. In other words, we look up separate embedding vectors for each feature, which are then concatenated. The length of the concatenated vector matches the total embedding size, and all other parts of the model remain unchanged."
],
[
"Our generalized model of the previous section supports an arbitrary number of input features. In this paper, we will focus on a number of well-known linguistic features. Our main empirical question is if providing linguistic features to the encoder improves the translation quality of neural machine translation systems, or if the information emerges from training encoder-decoder models on raw text, making its inclusion via explicit features redundant. All linguistic features are predicted automatically; we use Stanford CoreNLP BIBREF5 , BIBREF6 , BIBREF7 to annotate the English input for English INLINEFORM0 German, and ParZu BIBREF8 to annotate the German input for German INLINEFORM1 English. We here discuss the individual features in more detail."
],
[
"Using lemmas as input features guarantees sharing of information between word forms that share the same base form. In principle, neural models can learn that inflectional variants are semantically related, and represent them as similar points in the continuous vector space BIBREF9 . However, while this has been demonstrated for high-frequency words, we expect that a lemmatized representation increases data efficiency; low-frequency variants may even be unknown to word-level models. With character- or subword-level models, it is unclear to what extent they can learn the similarity between low-frequency word forms that share a lemma, especially if the word forms are superficially dissimilar. Consider the following two German word forms, which share the lemma liegen `lie':",
"liegt `lies' (3.p.sg. present)",
"läge `lay' (3.p.sg. subjunctive II)",
"The lemmatisers we use are based on finite-state methods, which ensures a large coverage, even for infrequent word forms. We use the Zmorge analyzer for German BIBREF10 , BIBREF11 , and the lemmatiser in the Stanford CoreNLP toolkit for English BIBREF6 ."
],
[
"In our experiments, we operate on the level of subwords to achieve open-vocabulary translation with a fixed symbol vocabulary, using a segmentation based on byte-pair encoding (BPE) BIBREF12 . We note that in BPE segmentation, some symbols are potentially ambiguous, and can either be a separate word, or a subword segment of a larger word. Also, text is represented as a sequence of subword units with no explicit word boundaries, but word boundaries are potentially helpful to learn which symbols to attend to, and when to forget information in the recurrent layers. We propose an annotation of subword structure similar to popular IOB format for chunking and named entity recognition, marking if a symbol in the text forms the beginning (B), inside (I), or end (E) of a word. A separate tag (O) is used if a symbol corresponds to the full word."
],
[
"For German INLINEFORM0 English, the parser annotates the German input with morphological features. Different word types have different sets of features – for instance, nouns have case, number and gender, while verbs have person, number, tense and aspect – and features may be underspecified. We treat the concatenation of all morphological features of a word, using a special symbol for underspecified features, as a string, and treat each such string as a separate feature value."
],
[
"In our introductory examples, we motivated POS tags and dependency labels as possible disambiguators. Each word is associated with one POS tag, and one dependency label. The latter is the label of the edge connecting a word to its syntactic head, or 'ROOT' if the word has no syntactic head."
],
[
"We segment rare words into subword units using BPE. The subword tags encode the segmentation of words into subword units, and need no further modification. All other features are originally word-level features. To annotate the segmented source text with features, we copy the word's feature value to all its subword units. An example is shown in Figure FIGREF26 ."
],
[
"We evaluate our systems on the WMT16 shared translation task English INLINEFORM0 German. The parallel training data consists of about 4.2 million sentence pairs.",
"To enable open-vocabulary translation, we encode words via joint BPE BIBREF12 , learning 89500 merge operations on the concatenation of the source and target side of the parallel training data. We use minibatches of size 80, a maximum sentence length of 50, word embeddings of size 500, and hidden layers of size 1024. We clip the gradient norm to 1.0 BIBREF13 . We train the models with Adadelta BIBREF14 , reshuffling the training corpus between epochs. We validate the model every 10000 minibatches via Bleu and perplexity on a validation set (newstest2013).",
"For neural MT, perplexity is a useful measure of how well the model can predict a reference translation given the source sentence. Perplexity is thus a good indicator of whether input features provide any benefit to the models, and we report the best validation set perplexity of each experiment. To evaluate whether the features also increase translation performance, we report case-sensitive Bleu scores with mteval-13b.perl on two test sets, newstest2015 and newstest2016. We also report chrF3 BIBREF15 , a character n-gram F INLINEFORM0 score which was found to correlate well with human judgments, especially for translations out of English BIBREF16 . The two metrics may occasionally disagree, partly because they are highly sensitive to the length of the output. Bleu is precision-based, whereas chrF3 considers both precision and recall, with a bias for recall. For Bleu, we also report whether differences between systems are statistically significant according to a bootstrap resampling significance test BIBREF17 .",
"We train models for about a week, and report results for an ensemble of the 4 last saved models (with models saved every 12 hours). The ensemble serves to smooth the variance between single models.",
"Decoding is performed with beam search with a beam size of 12.",
"To ensure that performance improvements are not simply due to an increase in the number of model parameters, we keep the total size of the embedding layer fixed to 500. Table TABREF29 lists the embedding size we use for linguistic features – the embedding layer size of the word-level feature varies, and is set to bring the total embedding layer size to 500. If we include the lemma feature, we roughly split the embedding vector one-to-two between the lemma feature and the word feature. The table also shows the network vocabulary size; for all features except lemmas, we can represent all feature values in the network vocabulary – in the case of words, this is due to BPE segmentation. For lemmas, we choose the same vocabulary size as for words, replacing rare lemmas with a special UNK symbol.",
"2015arXiv151106709S report large gains from using monolingual in-domain training data, automatically back-translated into the source language to produce a synthetic parallel training corpus. We use the synthetic corpora produced in these experiments (3.6–4.2 million sentence pairs), and we trained systems which include this data to compare against the state of the art. We note that our experiments with this data entail a syntactic annotation of automatically translated data, which may be a source of noise. For the systems with synthetic data, we double the training time to two weeks.",
"We also evaluate linguistic features for the lower-resourced translation direction English INLINEFORM0 Romanian, with 0.6 million sentence pairs of parallel training data, and 2.2 million sentence pairs of synthetic parallel data. We use the same linguistic features as for English INLINEFORM1 German. We follow sennrich-wmt16 in the configuration, and use dropout for the English INLINEFORM2 Romanian systems. We drop out full words (both on the source and target side) with a probability of 0.1. For all other layers, the dropout probability is set to 0.2."
],
[
"Table TABREF32 shows our main results for German INLINEFORM0 English, and English INLINEFORM1 German. The baseline system is a neural MT system with only one input feature, the (sub)words themselves. For both translation directions, linguistic features improve the best perplexity on the development data (47.3 INLINEFORM2 46.2, and 54.9 INLINEFORM3 52.9, respectively). For German INLINEFORM4 English, the linguistic features lead to an increase of 1.5 Bleu (31.4 INLINEFORM5 32.9) and 0.5 chrF3 (58.0 INLINEFORM6 58.5), on the newstest2016 test set. For English INLINEFORM7 German, we observe improvements of 0.6 Bleu (27.8 INLINEFORM8 28.4) and 1.2 chrF3 (56.0 INLINEFORM9 57.2).",
"To evaluate the effectiveness of different linguistic features in isolation, we performed contrastive experiments in which only a single feature was added to the baseline. Results are shown in Table TABREF33 . Unsurprisingly, the combination of all features (Table TABREF32 ) gives the highest improvement, averaged over metrics and test sets, but most features are beneficial on their own. Subword tags give small improvements for English INLINEFORM0 German, but not for German INLINEFORM1 English. All other features outperform the baseline in terms of perplexity, and yield significant improvements in Bleu on at least one test set. The gain from different features is not fully cumulative; we note that the information encoded in different features overlaps. For instance, both the dependency labels and the morphological features encode the distinction between German subjects and accusative objects, the former through different labels (subj and obja), the latter through grammatical case (nominative and accusative).",
"We also evaluated adding linguistic features to a stronger baseline, which includes synthetic parallel training data. In addition, we compare our neural systems against phrase-based (PBSMT) and syntax-based (SBSMT) systems by BIBREF18 , all of which make use of linguistic annotation on the source and/or target side. Results are shown in Table TABREF34 . For German INLINEFORM0 English, we observe similar improvements in the best development perplexity (45.2 INLINEFORM1 44.1), test set Bleu (37.5 INLINEFORM2 38.5) and chrF3 (62.2 INLINEFORM3 62.8). Our test set Bleu is on par to the best submitted system to this year's WMT 16 shared translation task, which is similar to our baseline MT system, but which also uses a right-to-left decoder for reranking BIBREF19 . We expect that linguistic input features and bidirectional decoding are orthogonal, and that we could obtain further improvements by combining the two.",
"For English INLINEFORM0 German, improvements in development set perplexity carry over (49.7 INLINEFORM1 48.4), but we see only small, non-significant differences in Bleu and chrF3. While we cannot clearly account for the discrepancy between perplexity and translation metrics, factors that potentially lower the usefulness of linguistic features in this setting are the stronger baseline, trained on more data, and the low robustness of linguistic tools in the annotation of the noisy, synthetic data sets. Both our baseline neural MT systems and the systems with linguistic features substantially outperform phrase-based and syntax-based systems for both translation directions.",
"In the previous tables, we have reported the best perplexity. To address the question about the randomness in perplexity, and whether the best perplexity just happened to be lower for the systems with linguistic features, we show perplexity on our development set as a function of training time for different systems (Figure FIGREF35 ). We can see that perplexity is consistently lower for the systems trained with linguistic features.",
"Table TABREF36 shows results for a lower-resourced language pair, English INLINEFORM0 Romanian. With linguistic features, we observe improvements of 1.0 Bleu over the baseline, both for the systems trained on parallel data only (23.8 INLINEFORM1 24.8), and the systems which use synthetic training data (28.2 INLINEFORM2 29.2). According to Bleu, the best submission to WMT16 was a system combination by qt21syscomb2016. Our best system is competitive with this submission.",
"Table TABREF37 shows translation examples of our baseline, and the system augmented with linguistic features. We see that the augmented neural MT systems, in contrast to the respective baselines, successfully resolve the reordering for the German INLINEFORM0 English example, and the disambiguation of close for the English INLINEFORM1 German example."
],
[
"Linguistic features have been used in neural language modelling BIBREF4 , and are also used in other tasks for which neural models have recently been employed, such as syntactic parsing BIBREF7 . This paper addresses the question whether linguistic features on the source side are beneficial for neural machine translation. On the target side, linguistic features are harder to obtain for a generation task such as machine translation, since this would require incremental parsing of the hypotheses at test time, and this is possible future work.",
"Among others, our model incorporates information from a dependency annotation, but is still a sequence-to-sequence model. 2016arXiv160306075E propose a tree-to-sequence model whose encoder computes vector representations for each phrase in the source tree. Their focus is on exploiting the (unlabelled) structure of a syntactic annotation, whereas we are focused on the disambiguation power of the functional dependency labels.",
"Factored translation models are often used in phrase-based SMT BIBREF21 as a means to incorporate extra linguistic information. However, neural MT can provide a much more flexible mechanism for adding such information. Because phrase-based models cannot easily generalize to new feature combinations, the individual models either treat each feature combination as an atomic unit, resulting in data sparsity, or assume independence between features, for instance by having separate language models for words and POS tags. In contrast, we exploit the strong generalization ability of neural networks, and expect that even new feature combinations, e.g. a word that appears in a novel syntactic function, are handled gracefully.",
"One could consider the lemmatized representation of the input as a second source text, and perform multi-source translation BIBREF22 . The main technical difference is that in our approach, the encoder and attention layers are shared between features, which we deem appropriate for the types of features that we tested."
],
[
"In this paper we investigate whether linguistic input features are beneficial to neural machine translation, and our empirical evidence suggests that this is the case.",
"We describe a generalization of the encoder in the popular attentional encoder-decoder architecture for neural machine translation that allows for the inclusion of an arbitrary number of input features. We empirically test the inclusion of various linguistic features, including lemmas, part-of-speech tags, syntactic dependency labels, and morphological features, into English INLINEFORM0 German, and English INLINEFORM1 Romanian neural MT systems. Our experiments show that the linguistic features yield improvements over our baseline, resulting in improvements on newstest2016 of 1.5 Bleu for German INLINEFORM2 English, 0.6 Bleu for English INLINEFORM3 German, and 1.0 Bleu for English INLINEFORM4 Romanian.",
"In the future, we expect several developments that will shed more light on the usefulness of linguistic (or other) input features, and whether they will establish themselves as a core component of neural machine translation. On the one hand, the machine learning capability of neural architectures is likely to increase, decreasing the benefit provided by the features we tested. On the other hand, there is potential to explore the inclusion of novel features for neural MT, which might prove to be even more helpful than the ones we investigated, and the features we investigated may prove especially helpful for some translation settings, such as very low-resourced settings and/or translation settings with a highly inflected source language."
],
[
"This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements 645452 (QT21), and 644402 (HimL)."
]
],
"section_name": [
"Introduction",
"Neural Machine Translation",
"Adding Input Features",
"Linguistic Input Features",
"Lemma",
"Subword Tags",
"Morphological Features",
"POS Tags and Dependency Labels",
"On Using Word-level Features in a Subword Model",
"Evaluation",
"Results",
"Related Work",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"5d1dee3f7c481b0871642d7e1b08b72aa5c9158c",
"e18c8cb0301b6948e78b81da596afa18a97c0b34"
],
"answer": [
{
"evidence": [
"For German INLINEFORM0 English, the parser annotates the German input with morphological features. Different word types have different sets of features – for instance, nouns have case, number and gender, while verbs have person, number, tense and aspect – and features may be underspecified. We treat the concatenation of all morphological features of a word, using a special symbol for underspecified features, as a string, and treat each such string as a separate feature value."
],
"extractive_spans": [
"case",
"number",
"gender",
"person",
"tense",
"aspect"
],
"free_form_answer": "",
"highlighted_evidence": [
"For German INLINEFORM0 English, the parser annotates the German input with morphological features. Different word types have different sets of features – for instance, nouns have case, number and gender, while verbs have person, number, tense and aspect – and features may be underspecified. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For German INLINEFORM0 English, the parser annotates the German input with morphological features. Different word types have different sets of features – for instance, nouns have case, number and gender, while verbs have person, number, tense and aspect – and features may be underspecified. We treat the concatenation of all morphological features of a word, using a special symbol for underspecified features, as a string, and treat each such string as a separate feature value."
],
"extractive_spans": [
"nouns have case, number and gender",
"verbs have person, number, tense and aspect",
"features may be underspecified"
],
"free_form_answer": "",
"highlighted_evidence": [
"Different word types have different sets of features – for instance, nouns have case, number and gender, while verbs have person, number, tense and aspect – and features may be underspecified."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"145b5cd3a082d525acf900e602fa131ae9de1d6d",
"5ea9291345737de084476e494246fa6057591dcc"
],
"answer": [
{
"evidence": [
"The decoder is a recurrent neural network that predicts a target sequence INLINEFORM0 . Each word INLINEFORM1 is predicted based on a recurrent hidden state INLINEFORM2 , the previously predicted word INLINEFORM3 , and a context vector INLINEFORM4 . INLINEFORM5 is computed as a weighted sum of the annotations INLINEFORM6 . The weight of each annotation INLINEFORM7 is computed through an alignment model INLINEFORM8 , which models the probability that INLINEFORM9 is aligned to INLINEFORM10 . The alignment model is a single-layer feedforward neural network that is learned jointly with the rest of the network through backpropagation."
],
"extractive_spans": [],
"free_form_answer": "Generalized attention",
"highlighted_evidence": [
"The weight of each annotation INLINEFORM7 is computed through an alignment model INLINEFORM8 , which models the probability that INLINEFORM9 is aligned to INLINEFORM10 . The alignment model is a single-layer feedforward neural network that is learned jointly with the rest of the network through backpropagation.",
"shared attention layers"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The decoder is a recurrent neural network that predicts a target sequence INLINEFORM0 . Each word INLINEFORM1 is predicted based on a recurrent hidden state INLINEFORM2 , the previously predicted word INLINEFORM3 , and a context vector INLINEFORM4 . INLINEFORM5 is computed as a weighted sum of the annotations INLINEFORM6 . The weight of each annotation INLINEFORM7 is computed through an alignment model INLINEFORM8 , which models the probability that INLINEFORM9 is aligned to INLINEFORM10 . The alignment model is a single-layer feedforward neural network that is learned jointly with the rest of the network through backpropagation."
],
"extractive_spans": [
"weighted sum of the annotations"
],
"free_form_answer": "",
"highlighted_evidence": [
"The decoder is a recurrent neural network that predicts a target sequence INLINEFORM0 . Each word INLINEFORM1 is predicted based on a recurrent hidden state INLINEFORM2 , the previously predicted word INLINEFORM3 , and a context vector INLINEFORM4 . INLINEFORM5 is computed as a weighted sum of the annotations INLINEFORM6 . The weight of each annotation INLINEFORM7 is computed through an alignment model INLINEFORM8 , which models the probability that INLINEFORM9 is aligned to INLINEFORM10 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What morphological features are considered?",
"What type of attention do they use in the decoder?"
],
"question_id": [
"2686e8d51caff9a19684e0c9984bcb5a1937d08d",
"df623717255ea2c9e0f846859d8a9ef51dc1102b"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
""
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Original dependency tree for sentence Leonidas begged in the arena ., and our feature representation after BPE segmentation.",
"Table 1: Vocabulary size, and size of embedding layer of linguistic features, in system that includes all features, and contrastive experiments that add a single feature over the baseline. The embedding layer size of the word feature is set to bring the total size to 500.",
"Table 3: Contrastive experiments with individual linguistic features: best perplexity on dev (newstest2013), and BLEU and CHRF3 on test15 (newstest2015) and test16 (newstest2016). BLEU scores that are significantly different (p < 0.05) from respective baseline are marked with (*).",
"Table 2: German↔English translation results: best perplexity on dev (newstest2013), and BLEU and CHRF3 on test15 (newstest2015) and test16 (newstest2016). BLEU scores that are significantly different (p < 0.05) from respective baseline are marked with (*).",
"Table 4: German↔English translation results with additional, synthetic training data: best perplexity on dev (newstest2013), and BLEU and CHRF3 on test15 (newstest2015) and test16 (newstest2016). BLEU scores that are significantly different (p < 0.05) from respective baseline are marked with (*).",
"Table 6: Translation examples illustrating the effect of adding linguistic input features.",
"Figure 2: English→German (black) and German→English (red) development set perplexity as a function of training time (number of minibatches) with and without linguistic features.",
"Table 5: English→Romanian translation results: best perplexity on newsdev2016, and BLEU and CHRF3 on newstest2016. BLEU scores that are significantly different (p < 0.05) from respective baseline are marked with (*)."
],
"file": [
"5-Figure1-1.png",
"5-Table1-1.png",
"7-Table3-1.png",
"7-Table2-1.png",
"7-Table4-1.png",
"8-Table6-1.png",
"8-Figure2-1.png",
"8-Table5-1.png"
]
} | [
"What type of attention do they use in the decoder?"
] | [
[
"1606.02892-Neural Machine Translation-3"
]
] | [
"Generalized attention"
] | 330 |
1808.09716 | What can we learn from Semantic Tagging? | We investigate the effects of multi-task learning using the recently introduced task of semantic tagging. We employ semantic tagging as an auxiliary task for three different NLP tasks: part-of-speech tagging, Universal Dependency parsing, and Natural Language Inference. We compare full neural network sharing, partial neural network sharing, and what we term the learning what to share setting where negative transfer between tasks is less likely. Our findings show considerable improvements for all tasks, particularly in the learning what to share setting, which shows consistent gains across all tasks. | {
"paragraphs": [
[
"Multi-task learning (MTL) is a recently resurgent approach to machine learning in which multiple tasks are simultaneously learned. By optimising the multiple loss functions of related tasks at once, multi-task learning models can achieve superior results compared to models trained on a single task. The key principle is summarized by BIBREF0 as “MTL improves generalization by leveraging the domain-specific information contained in the training signals of related tasks\". Neural MTL has become an increasingly successful approach by exploiting similarities between Natural Language Processing (NLP) tasks BIBREF1 , BIBREF2 , BIBREF3 . Our work builds upon BIBREF4 , who demonstrate that employing semantic tagging as an auxiliary task for Universal Dependency BIBREF5 part-of-speech tagging can lead to improved performance.",
"The objective of this paper is to investigate whether learning to predict lexical semantic categories can be beneficial to other NLP tasks. To achieve this we augment single-task models (ST) with an additional classifier to predict semantic tags and jointly optimize for both the original task and the auxiliary semantic tagging task. Our hypothesis is that learning to predict semantic tags as an auxiliary task can improve performance of single-task systems. We believe that this is, among other factors, due to the following:",
"We test our hypothesis on three disparate NLP tasks: (i) Universal Dependency part-of-speech tagging (UPOS), (ii) Universal Dependency parsing (UD DEP), a complex syntactic task; and (iii) Natural Language Inference (NLI), a complex task requiring deep natural language understanding."
],
[
"Semantic tagging BIBREF4 , BIBREF7 is the task of assigning language-neutral semantic categories to words. It is designed to overcome a lack of semantic information syntax-oriented part-of-speech tagsets, such as the Penn Treebank tagset BIBREF8 , usually have. Such tagsets exclude important semantic distinctions, such as negation and modals, types of quantification, named entity types, and the contribution of verbs to tense, aspect, or event.",
"The semantic tagset is language-neutral, abstracts over part-of-speech and named-entity classes, and includes fine-grained semantic information. The tagset consists of 80 semantic tags grouped in 13 coarse-grained classes. The tagset originated in the Parallel Meaning Bank (PMB) project BIBREF9 , where it contributes to compositional semantics and cross-lingual projection of semantic representations. Recent work has highlighted the utility of the tagset as a conduit for evaluating the semantics captured by vector representations BIBREF10 , or employed it in an auxiliary tagging task BIBREF4 , as we do in this work."
],
[
"Recently, there has been an increasing interest in the development of models which are trained to learn what to (and what not to) share between a set of tasks, with the general aim of preventing negative transfer when the tasks are not closely related BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Our Learning What to Share setting is based on this idea and closely related to BIBREF15 's shared layer architecture.",
"Specifically, a layer $\\vec{h}_{X}$ which is shared between the main task and the auxiliary task is split into two subspaces: a shared subspace $\\vec{h}_{X_{S}}$ and a private subspace $\\vec{h}_{X_{P}}$ . The interaction between the shared subspaces is modulated via a sigmoidal gating unit applied to a set of learned weights, as seen in Equations ( 9 ) and () where $\\vec{h}_{X_{S(main)}}$ and $\\vec{h}_{X_{S(aux)}}$ are the main and auxiliary tasks' shared layers, $W_{a\\rightarrow m}$ and $W_{m\\rightarrow a}$ are learned weights, and $\\sigma $ is a sigmoidal function. ",
"$$\\vec{h}_{X_{S(main)}} &= \\vec{h}_{X_{S(main)}} \\sigma (\\vec{h}_{X_{S(aux)}} W_{a\\rightarrow m})\\\\\n\\vec{h}_{X_{S(aux)}} &= \\vec{h}_{X_{S(aux)}} \\sigma (\\vec{h}_{X_{S(main)}} W_{m\\rightarrow a})$$ (Eq. 9) ",
"Unlike BIBREF15 's Shared-Layer Architecture, in our setup each task has its own shared subspace rather than one common shared layer. This enables the sharing of different parameters in each direction (i.e., from main to auxiliary task and from auxiliary to main task), allowing each task to choose what to learn from the other, rather than having “one shared layer to capture the shared information for all the tasks” as in BIBREF15 ."
],
[
"We implement three neural MTL settings, shown in Figure 1 . They differ in the way the network's parameters are shared between the tasks:"
],
[
"In the UPOS tagging experiments, we utilize the UD 2.0 English corpus BIBREF16 for the POS tagging and the semantically tagged PMB release 0.1.0 (sem-PMB) for the MTL settings. Note that there is no overlap between the two datasets. Conversely, for the UD DEP and NLI experiments there is a complete overlap between the datasets of main and auxiliary tasks, i.e., each instance is labeled with both the main task's labels and semantic tags. We use the Stanford POS Tagger BIBREF17 trained on sem-PMB to tag the UD corpus and NLI datasets with semantic tags, and then use those assigned tags for the MTL settings of our dependency parsing and NLI models. We find that this approach leads to better results when the main task is only loosely related to the auxiliary task. The UD DEP experiments use the English UD 2.0 corpus, and the NLI experiments use the SNLI BIBREF18 and SICK-E datasets BIBREF19 . The provided train, development, and test splits are used for all datasets. For sem-PMB, the silver and gold parts are used for training and testing respectively."
],
[
"We run four experiments for each of the four tasks (UPOS, UD DEP, SNLI, SICK-E), one using the ST model and one for each of the three MTL settings. Each experiment is run five times, and the average of the five runs is reported. We briefly describe the ST models and refer the reader to the original work for further details due to a lack of space. For reproducibility, detailed diagrams of the MTL models for each task and their hyperparameters can be found in Appendix \"MTL setting Diagrams, Preprocessing, and Hyperparameters\" ."
],
[
"Our tagging model uses a basic contextual one-layer bi-LSTM BIBREF20 that takes in word embeddings and produces a sequence of recurrent states which can be viewed as contextualized representations. The recurrent $r_n$ state from the bi-LSTM corresponding to each time-step $t_n$ is passed through a dense layer with a softmax activation to predict the token's tag.",
"In each of the MTL settings a softmax classifier is added to predict a token's semantic tag and the model is then jointly trained on the concatenation of the sem-PMB and UPOS tagging data to minimize the sum of softmax cross-entropy losses of both the main (UPOS tagging) and auxiliary (semantic tagging) tasks."
],
[
"We employ a parsing model that is based on BIBREF21 BIBREF21 . The model's embeddings layer is a concatenation of randomly initialized word embeddings and character-based word representations added to pre-trained word embeddings, which are passed through a 4-layer stacked bi-LSTM. Unlike BIBREF21 , our model jointly learns to perform UPOS tagging and parsing, instead of treating them as separate tasks. Therefore, instead of tag embeddings, we add a softmax classifier to predict UPOS tags after the first bi-LSTM layer. The outputs from that layer and the UPOS softmax prediction vectors are both concatenated to the original embedding layer and passed to the second bi-LSTM layer. The output of the last bi-LSTM is then used as input for four dense layers with a ReLU activation, producing four vector representations: a word as a dependent seeking its head; a word as a head seeking all its dependents; a word as a dependent deciding on its label; a word as head deciding on the labels of its dependents. These representations are then passed to biaffine and affine softmax classifiers to produce a fully-connected labeled probabilistic dependency graph BIBREF21 . Finally, a non-projective maximum spanning tree parsing algorithm BIBREF22 , BIBREF23 is used to obtain a well-formed dependency tree.",
"Similarly to UPOS tagging, an additional softmax classifier is used to predict a token's semantic tag in each of the MTL settings, as both tasks are jointly learned. In the FSN setting, the 4-layer stacked bi-LSTM is entirely shared. In the PSN setting the semantic tags are predicted from the second layer's hidden states, and the final two layers are devoted to the parsing task. In the LWS setting, the first two layers of the bi-LSTM are split into a private bi-LSTM $_{private}$ and a shared bi-LSTM $_{shared}$ for each of the tasks with the interaction between the shared subspaces being modulated via a gating unit. Then, two bi-LSTM layers that are devoted to parsing only are stacked on top."
],
[
"We base our NLI model on BIBREF25 's Enhanced Sequential Inference Model which uses a bi-LSTM to encode the premise and hypothesis, computes a soft-alignment between premise and hypothesis' representations using an attention mechanism, and employs an inference composition bi-LSTM to compose local inference information sequentially. The MTL settings are implemented by adding a softmax classifier to predict semantic tags at the level of the encoding bi-LSTM, with rest of the model unaltered. In the FSN setting, the hidden states of the encoding bi-LSTM are directly passed as input to the softmax classifier. In the PSN setting an earlier bi-LSTM layer is used to predict the semantic tags and the output from that is passed on to the encoding bi-LSTM which is stacked on top. This follows BIBREF26 's hierarchical approach. In the LWS setting, a bi-LSTM layer with private and shared subspaces is used for semantic tagging and for the ESIM model's encoding layer. In all MTL settings, the bi-LSTM used for semantic tagging is pre-trained on the sem-PMB data."
],
[
"Results for all tasks are shown in Table 1 . In line with BIBREF4 's findings, the FSN setting leads to an improvement for UPOS tagging. POS tagging, a sequence labeling task, can be seen as the most closely related to semantic tagging, therefore negative transfer is minimal and the full sharing of parameters is beneficial. Surprisingly, the FSN setting also leads improvements for UD DEP. Indeed, for UD DEP, all of the MTL models outperform the ST model by increasing margins. For the NLI tasks, however, there is a clear degradation in performance.",
"The PSN setting shows mixed results and does not show a clear advantage over FSN for UPOS and UD DEP. This suggests that adding task-specific layers after fully-shared ones does not always enable sufficient task specialization. For the NLI tasks however, PSN is clearly preferable to FSN, especially for the small-sized SICK-E dataset where the FSN model fails to adequately learn.",
"As a sentence-level task, NLI is functionally dissimilar to semantic tagging. However, it is a task which requires deep understanding of natural language semantics and can therefore conceivably benefit from the signal provided by semantic tagging. Our results demonstrate that it is possible to leverage this signal given a selective sharing setup where negative transfer can be minimized. Indeed, for the NLI tasks, only the LWS setting leads to improvements over the ST models. The improvement is larger for the SICK-E task which has a much smaller training set and therefore stands to learn more from the semantic tagging signal. For all tasks, it can be observed that the LWS models outperform the rest of the models. This is in line with our expectations with the findings from previous work BIBREF12 , BIBREF15 that selective sharing outperforms full network and partial network sharing."
],
[
"In addition to evaluating performance directly, we attempt to qualify how semtags affect performance with respect to each of the SNLI MTL settings."
],
[
"The fact that NLI is a sentence-level task, while semantic tags are word-level annotations presents a difficulty in measuring the effect of semantic tags on the systems' performance, as there is no one-to-one correspondence between a correct label and a particular semantic tag. We therefore employ the following method in order to assess the contribution of semantic tags. Given the performance ranking of all our systems — $FSN < ST < PSN < LWS$ — we make a pairwise comparison between the output of a superior system $S_{sup}$ and an inferior system $S_{inf}$ . This involves taking the pairs of sentences that every $S_{sup}$ classifies correctly, but some $S_{inf}$ does not. Given that FSN is the worst performing system and, as such, has no `worse' system for comparison, we are left with six sets of sentences: ST-FSN, PSN-FSN, PSN-ST, LWS-PSN, LWS-ST, and LWS-FSN. To gain insight as to where a given system $S_{sup}$ performs better than a given $S_{inf}$ , we then sort each comparison sentence set by the frequency of semtags predicted therein, which are normalized by dividing by their frequency in the full SNLI test set.",
"We notice interesting patterns, visible in Figure 2 . Specifically, PSN appears markedly better at sentences with named entities (ART, PER, GEO, ORG) and temporal entities (DOM) than both ST and the FSN. Marginal improvements are also observed for sentences with negation and reflexive pronouns. The LWS setting continues this pattern, with additional improvements observable for sentences with the HAP tag for names of events, SST for subsective attributes, and the ROL tag for role nouns."
],
[
"To assess the contribution of the semantic tagging auxiliary task independent of model architecture and complexity we run three additional SNLI experiments — one for each MTL setting — where the model architectures are unchanged but the auxiliary tasks are assigned no weight (i.e. do not affect the learning). The results confirm our previous findings that, for NLI, the semantic tagging auxiliary task only improves performance in a selective sharing setting, and hurts it otherwise: i) the FSN system which had performed below ST improves to equal it and ii) the PSN and LWS settings both see a drop to ST-level performance."
],
[
"We present a comprehensive evaluation of MTL using a recently proposed task of semantic tagging as an auxiliary task. Our experiments span three types of NLP tasks and three MTL settings. The results of the experiments show that employing semantic tagging as an auxiliary task leads to improvements in performance for UPOS tagging and UD DEP in all MTL settings. For the SNLI tasks, requiring understanding of phrasal semantics, the selective sharing setup we term Learning What to Share holds a clear advantage. Our work offers a generalizable framework for the evaluation of the utility of an auxiliary task."
],
[
"fig:upos shows the three MTL models used for UPOS. All hyperparameters were tuned with respect to loss on the English UD 2.0 UPOS validation set. We trained for 20 epochs with a batch size of 128 and optimized using Adam BIBREF27 with a learning rate of $0.0001$ . We weight the auxiliary semantic tagging loss with $\\lambda $ = $0.1$ . The pre-trained word embeddings we used are GloVe embeddings BIBREF28 of dimension 100 trained on 6 billion tokens of Wikipedia 2014 and Gigaword 5. We applied dropout and recurrent dropout with a probability of $0.3$ to all bi-LSTMs."
],
[
"fig:dep shows the three MTL models for UD DEP. We use the gold tokenization. All hyperparameters were tuned with respect to loss on the English UD 2.0 UD validation set. We trained for 15 epochs with a batch size of 50 and optimized using Adam with a learning rate of $2e-3$ . We weight the auxiliary semantic tagging loss with $\\lambda $ = $0.5$ . The pre-trained word embeddings we use are GloVe embeddings of dimension 100 trained on 6 billion tokens of Wikipedia 2014 and Gigaword 5. We applied dropout with a probability of $0.33$ to all bi-LSTM, embedding layers, and non-output dense layers."
],
[
"fig:nli shows the three MTL models for NLI. All hyperparameters were tuned with respect to loss on the SNLI and SICK-E validation datasets (separately). For the SNLI experiments, we trained for 37 epochs with a batch size of 128. For the SICK-E experiments, we trained for 20 epochs with a batch size of 8. Note that the ESIM model was designed for the SNLI dataset, therefore performance is non-optimal for SICK-E. For both sets of experiments: we optimized using Adam with a learning rate of $0.00005$ ; we weight the auxiliary semantic tagging loss with $\\lambda $ = $0.1$ ; the pre-trained word embeddings we use are GloVe embeddings of dimension 300 trained on 840 billion tokens of Common Crawl; and we applied dropout and recurrent dropout with a probability of $0.3$ to all bi-LSTM, and non-output dense layers."
],
[
"tab:examples shows demonstrative examples from the SNLI test set on which the Learning What to Share (LWS) model outperforms the single-task (ST) model. The examples cover all possible combinations of entailment classes. tab:semtags explains the relevant part of the semantic tagset. tab:fscore shows the per-label precision and recall scores."
]
],
"section_name": [
"Introduction",
"Semantic Tagging",
"Learning What to Share",
"Multi-Task Learning Settings",
"Data",
"Experiments",
"Universal Dependency POS Tagging",
"Universal Dependency Parsing",
"Natural Language Inference",
"Results and Discussion",
"Analysis",
"Qualitative analyses",
"Contribution of semantic tagging",
"Conclusions",
"UPOS Tagging",
"UD DEP",
"NLI",
"SNLI model output analysis"
]
} | {
"answers": [
{
"annotation_id": [
"147c25fbacee9320fc172b060b0bf4656d60fcd4",
"9ce7d51e5398992d43b1b8068a3efd183d25a50f"
],
"answer": [
{
"evidence": [
"The semantic tagset is language-neutral, abstracts over part-of-speech and named-entity classes, and includes fine-grained semantic information. The tagset consists of 80 semantic tags grouped in 13 coarse-grained classes. The tagset originated in the Parallel Meaning Bank (PMB) project BIBREF9 , where it contributes to compositional semantics and cross-lingual projection of semantic representations. Recent work has highlighted the utility of the tagset as a conduit for evaluating the semantics captured by vector representations BIBREF10 , or employed it in an auxiliary tagging task BIBREF4 , as we do in this work.",
"FLOAT SELECTED: Table 3: The list of semantic tags found in Table 2."
],
"extractive_spans": [],
"free_form_answer": "Tags categories ranging from anaphoric (definite, possessive pronoun), attribute (colour, concrete quantity, intersective, relation), unnamed entity (concept), logical (alternative, disjunction), discourse (subordinate relation), events (present simple, past simple), etc.",
"highlighted_evidence": [
"The semantic tagset is language-neutral, abstracts over part-of-speech and named-entity classes, and includes fine-grained semantic information. The tagset consists of 80 semantic tags grouped in 13 coarse-grained classes. The tagset originated in the Parallel Meaning Bank (PMB) project BIBREF9 , where it contributes to compositional semantics and cross-lingual projection of semantic representations.",
"FLOAT SELECTED: Table 3: The list of semantic tags found in Table 2."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The semantic tagset is language-neutral, abstracts over part-of-speech and named-entity classes, and includes fine-grained semantic information. The tagset consists of 80 semantic tags grouped in 13 coarse-grained classes. The tagset originated in the Parallel Meaning Bank (PMB) project BIBREF9 , where it contributes to compositional semantics and cross-lingual projection of semantic representations. Recent work has highlighted the utility of the tagset as a conduit for evaluating the semantics captured by vector representations BIBREF10 , or employed it in an auxiliary tagging task BIBREF4 , as we do in this work."
],
"extractive_spans": [
"tagset originated in the Parallel Meaning Bank (PMB) project BIBREF9"
],
"free_form_answer": "",
"highlighted_evidence": [
"The tagset consists of 80 semantic tags grouped in 13 coarse-grained classes. The tagset originated in the Parallel Meaning Bank (PMB) project BIBREF9 , where it contributes to compositional semantics and cross-lingual projection of semantic representations"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"eca216170c00be9528a4f86abcb3ffe7115a9be2"
]
},
{
"annotation_id": [
"c0cec918c717e87a5d8023b062b83a074ba6b8fa",
"c3b3c07da00945dbffb40fa9fc1ee5583127b1e5"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Results for single-task models (ST), fullyshared networks (FSN), partially-shared networks (PSN), and learning what to share (LWS). All scores are reported as accuracy, except UD DEP for which we report LAS/UAS F1 score."
],
"extractive_spans": [],
"free_form_answer": "0.5 improvement with LWS over the single-task model",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Results for single-task models (ST), fullyshared networks (FSN), partially-shared networks (PSN), and learning what to share (LWS). All scores are reported as accuracy, except UD DEP for which we report LAS/UAS F1 score."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Results for single-task models (ST), fullyshared networks (FSN), partially-shared networks (PSN), and learning what to share (LWS). All scores are reported as accuracy, except UD DEP for which we report LAS/UAS F1 score."
],
"extractive_spans": [],
"free_form_answer": "Accuracy: SNLI - .5, SICK-E - 3.27",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Results for single-task models (ST), fullyshared networks (FSN), partially-shared networks (PSN), and learning what to share (LWS). All scores are reported as accuracy, except UD DEP for which we report LAS/UAS F1 score."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"eca216170c00be9528a4f86abcb3ffe7115a9be2"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What set of semantic tags did they use?",
"How much improvement did they see on the NLI task?"
],
"question_id": [
"ac482ab8a5c113db7c1e5f106a5070db66e7ba37",
"24897f57e3b0550be1212c0d9ebfcf83bad4164e"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Our three multi-task learning settings: (A) fully shared networks, (B) partially shared networks, and (C) Learning What to Share. Layers are mathematically denoted by vectors and the connections between them, represented by arrows, are mathematically denoted by matrices of weights. S indicates a shared layer, P a private layer, and X a layer with shared and private subspaces.",
"Table 1: Results for single-task models (ST), fullyshared networks (FSN), partially-shared networks (PSN), and learning what to share (LWS). All scores are reported as accuracy, except UD DEP for which we report LAS/UAS F1 score.",
"Figure 2: Normalized semantic tag frequencies for all six sets of sentences. X - Y denotes the set of sentences correctly classified by model X but misclassified by model Y.",
"Figure 3: The three MTL settings for each task. Layers dimensions are displayed in brackets.",
"Table 3: The list of semantic tags found in Table 2.",
"Table 4: Per-label precision (left) and recall (right) for all models.",
"Table 2: Examples of the entailment problems from SNLI which are incorrectly classified by the ST model but correctly classified by the LWS model. Automatically assigned semantic tags are in superscript."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"5-Figure2-1.png",
"8-Figure3-1.png",
"9-Table3-1.png",
"9-Table4-1.png",
"9-Table2-1.png"
]
} | [
"What set of semantic tags did they use?",
"How much improvement did they see on the NLI task?"
] | [
[
"1808.09716-Semantic Tagging-1",
"1808.09716-9-Table3-1.png"
],
[
"1808.09716-4-Table1-1.png"
]
] | [
"Tags categories ranging from anaphoric (definite, possessive pronoun), attribute (colour, concrete quantity, intersective, relation), unnamed entity (concept), logical (alternative, disjunction), discourse (subordinate relation), events (present simple, past simple), etc.",
"Accuracy: SNLI - .5, SICK-E - 3.27"
] | 331 |
2002.10210 | Learning to Select Bi-Aspect Information for Document-Scale Text Content Manipulation | In this paper, we focus on a new practical task, document-scale text content manipulation, which is the opposite of text style transfer and aims to preserve text styles while altering the content. In detail, the input is a set of structured records and a reference text for describing another recordset. The output is a summary that accurately describes the partial content in the source recordset with the same writing style of the reference. The task is unsupervised due to lack of parallel data, and is challenging to select suitable records and style words from bi-aspect inputs respectively and generate a high-fidelity long document. To tackle those problems, we first build a dataset based on a basketball game report corpus as our testbed, and present an unsupervised neural model with interactive attention mechanism, which is used for learning the semantic relationship between records and reference texts to achieve better content transfer and better style preservation. In addition, we also explore the effectiveness of the back-translation in our task for constructing some pseudo-training pairs. Empirical results show superiority of our approaches over competitive methods, and the models also yield a new state-of-the-art result on a sentence-level dataset. | {
"paragraphs": [
[
"Data-to-text generation is an effective way to solve data overload, especially with the development of sensor and data storage technologies, which have rapidly increased the amount of data produced in various fields such as weather, finance, medicine and sports BIBREF0. However, related methods are mainly focused on content fidelity, ignoring and lacking control over language-rich style attributes BIBREF1. For example, a sports journalist prefers to use some repetitive words when describing different games BIBREF2. It can be more attractive and practical to generate an article with a particular style that is describing the conditioning content.",
"In this paper, we focus on a novel research task in the field of text generation, named document-scale text content manipulation. It is the task of converting contents of a document into another while preserving the content-independent style words. For example, given a set of structured records and a reference report, such as statistical tables for a basketball game and a summary for another game, we aim to automatically select partial items from the given records and describe them with the same writing style (e.g., logical expressions, or wording, transitions) of the reference text to directly generate a new report (Figure 1).",
"In this task, the definition of the text content (e.g., statistical records of a basketball game) is clear, but the text style is vague BIBREF3. It is difficult to construct paired sentences or documents for the task of text content manipulation. Therefore, the majority of existing text editing studies develop controlled generator with unsupervised generation models, such as Variational Auto-Encoders (VAEs) BIBREF4, Generative Adversarial Networks (GANs) BIBREF5 and auto-regressive networks BIBREF6 with additional pre-trained discriminators.",
"Despite the effectiveness of these approaches, it remains challenging to generate a high-fidelity long summary from the inputs. One reason for the difficulty is that the input structured records for document-level generation are complex and redundant to determine which part of the data should be mentioned based on the reference text. Similarly, the model also need to select the suitable style words according to the input records. One straightforward way to address this problem is to use the relevant algorithms in data-to-text generation, such as pre-selector BIBREF7 and content selector BIBREF8. However, these supervised methods cannot be directly transferred considering that we impose an additional goal of preserving the style words, which lacks of parallel data and explicit training objective. In addition, when the generation length is expanded from a sentence to a document, the sentence-level text content manipulation method BIBREF1 can hardly preserve the style word (see case study, Figure 4).",
"In this paper, we present a neural encoder-decoder architecture to deal with document-scale text content manipulation. In the first, we design a powerful hierarchical record encoder to model the structured records. Afterwards, instead of modeling records and reference summary as two independent modules BIBREF1, we create fusion representations of records and reference words by an interactive attention mechanism. It can capture the semantic relatedness of the source records with the reference text to enable the system with the capability of content selection from two different types of inputs. Finally, we incorporate back-translation BIBREF9 into the training procedure to further improve results, which provides an extra training objective for our model.",
"To verify the effectiveness of our text manipulation approaches, we first build a large unsupervised document-level text manipulation dataset, which is extracted from an NBA game report corpus BIBREF10. Experiments of different methods on this new corpus show that our full model achieves 35.02 in Style BLEU and 39.47 F-score in Content Selection, substantially better than baseline methods. Moreover, a comprehensive evaluation with human judgment demonstrates that integrating interactive attention and back-translation could improve the content fidelity and style preservation of summary by a basic text editing model. In the end, we conduct extensive experiments on a sentence-level text manipulation dataset BIBREF1. Empirical results also show that the proposed approach achieves a new state-of-the-art result."
],
[
"Our goal is to automatically select partial items from the given content and describe them with the same writing style of the reference text. As illustrated in Figure 1, each input instance consists of a statistical table $x$ and a reference summary $y^{\\prime }$. We regard each cell in the table as a record $r=\\lbrace r_{o}\\rbrace _{o=1}^{L_x}$, where $L_x$ is the number of records in table $x$. Each record $r$ consists of four types of information including entity $r.e$ (the name of team or player, such as LA Lakers or Lebron James), type $r.t$ (the types of team or player, e.g., points, assists or rebounds) and value $r.v$ (the value of a certain player or team on a certain type), as well as feature $r.f$ (e.g., home or visiting) which indicates whether a player or a team compete in home court or not. In practice, each player or team takes one row in the table and each column contains a type of record such as points, assists, etc. The reference summary or report consists of multiple sentences, which are assumed to describe content that has the same types but different entities and values with that of the table $x$.",
"Furthermore, following the same setting in sentence-level text content manipulation BIBREF1, we also provide additional information at training time. For instance, each given table $x$ is paired with a corresponding $y_{aux}$, which was originally written to describe $x$ and each reference summary $y^{\\prime }$ also has its corresponding table $x^{\\prime }$ containing the records information. The additional information can help models to learn the table structure and how the desired records can be expressed in natural language when training. It is worth noting that we do not utilize the side information beyond $(x, y^{\\prime })$ during the testing phase and the task is unsupervised as there is no ground-truth target text for training."
],
[
"In this subsection, we construct a large document-scale text content manipulation dataset as a testbed of our task. The dataset is derived from an NBA game report corpus ROTOWIRE BIBREF10, which consists of 4,821 human written NBA basketball game summaries aligned with their corresponding game tables. In our work, each of the original table-summary pair is treated as a pair of $(x, y_{aux})$, as described in previous subsection. To this end, we design a type-based method for obtaining a suitable reference summary $y^{\\prime }$ via retrieving another table-summary from the training data using $x$ and $y_{aux}$. The retrieved $y^{\\prime }$ contains record types as same as possible with record types contained in $y$. We use an existing information extraction tool BIBREF10 to extract record types from the reference text. Table TABREF3 shows the statistics of constructed document-level dataset and a sentence-level benchmark dataset BIBREF1. We can see that the proposed document-level text manipulation problem is more difficult than sentence-level, both in terms of the complexity of input records and the length of generated text."
],
[
"This section describes the proposed approaches to tackle the document-level problem. We first give an overview of our architecture. Then, we provide detailed formalizations of our model with special emphasize on Hierarchical Record Encoder, Interactive Attention, Decoder and Back-translation."
],
[
"In this section, we present an overview of our model for document-scale text content manipulation, as illustrated in Figure 2. Since there are unaligned training pairs, the model is trained with three competing objectives of reconstructing the auxiliary document $y_{aux}$ based on $x$ and $y^{\\prime }$ (for content fidelity), the reference document $y^{\\prime }$ based on $x^{\\prime }$ and $y^{\\prime }$ (for style preservation), and the reference document $y^{\\prime }$ based on $x^{\\prime }$ and pseudo $z$ (for pseudo training pair). Formally, let $p_{\\theta }=(z|x,y^{\\prime })$ denotes the model that takes in records $x$ and a reference summary $y^{\\prime }$, and generates a summary $z$. Here $\\theta $ is the model parameters. In detail, the model consists of a reference encoder, a record encoder, an interactive attention and a decoder.",
"The first reference encoder is used to extract the representation of reference summary $y^{\\prime }$ by employing a bidirectional-LSTM model BIBREF11. The second record encoder is applied to learn the representation of all records via hierarchical modeling on record-level and row-level. The interactive attention is a co-attention method for learning the semantic relationship between the representation of each record and the representation of each reference word. The decoder is another LSTM model to generate the output summary with a hybrid attention-copy mechanism at each decoding step.",
"Note that we set three goals, namely content fidelity, style preservation and pseudo training pair. Similar to sentence-scale text content manipulation BIBREF1, the first two goals are simultaneous and in a sense competitive with each other (e.g., describing the new designated content would usually change the expressions in reference sentence to some extent). The content fidelity objective $L_{record}(\\theta )$ and style preservation objective $L_{style}(\\theta )$ are descirbed in following equations.",
"The third objective is used for training our system in a true text manipulation setting. We can regard this as an application of the back-translation algorithm in document-scale text content manipulation. Subsection \"Back-translation Objective\" will give more details."
],
[
"We develop a hierarchical table encoder to model game statistical tables on record-level and row-level in this paper. It can model the relatedness of a record with other records in same row and a row (e.g., a player) with other rows (e.g., other players) in same table. As shown in the empirical study (see Table 2), the hierarchical encoder can gain significant improvements compared with the standard MLP-based data-to-text model BIBREF10. Each word and figure are represented as a low dimensional, continuous and real-valued vector, also known as word embedding BIBREF12, BIBREF13. All vectors are stacked in a word embedding matrix $L_w \\in \\mathbb {R}^{d \\times |V|}$, where $d$ is the dimension of the word vector and $|V|$ is the vocabulary size.",
"On record-level, we first concatenate the embedding of record's entity, type, value and feature as an initial representation of the record ${{r_{ij}}} = \\lbrace {r_{ij}.e};{r_{ij}.t};{r_{ij}.v};{r_{ij}.f} \\rbrace \\in \\mathbb {R}^{4d \\times 1} $, where ${i, j}$ denotes a record in the table of $i^{th}$ row and $j^{th}$ column as mentioned in Section 2.1. Afterwards, we employ a bidirectional-LSTM to model records of the same row. For the $i^{th}$ row, we take record $\\lbrace r_{i1}, ...,r_{ij}, ..., r_{iM} \\rbrace $ as input, then obtain record's forward hidden representations $\\lbrace \\overrightarrow{hc_{i1}}, ...,\\overrightarrow{hc_{ij}}, ..., \\overrightarrow{hc_{iM}} \\rbrace $ and backward hidden representations $\\lbrace \\overleftarrow{hc_{i1}}, ...,\\overleftarrow{hc_{ij}}, ..., \\overleftarrow{hc_{iM}} \\rbrace $, where $M$ is the number of columns (the number of types). In the end, we concatenate $\\overrightarrow{hc_{ij}}$ and $\\overleftarrow{hc_{ij}} $ as a final representation of record $r_{ij}$ and concatenate $\\overrightarrow{hc_{iM}}$ and $\\overleftarrow{hc_{i1}}$ as a hidden vector of the $i^{th}$ row.",
"On row-level, the modeled row vectors are fed to another bidirectional-LSTM model to learn the table representation. In the same way, we can obtain row's forward hidden representations $\\lbrace \\overrightarrow{hr_{1}}, ...,\\overrightarrow{hr_{i}}, ..., \\overrightarrow{hr_{N}} \\rbrace $ and backward hidden representations $\\lbrace \\overleftarrow{hr_{1}}, ...,\\overleftarrow{hr_{i}}, ..., \\overleftarrow{hr_{N}} \\rbrace $, where $N$ is the number of rows (the number of entities). And the concatenation of $[\\overrightarrow{hr_{i}}, \\overleftarrow{hr_{i}}]$ is regarded as a final representation of the $i^{th}$ row. An illustration of this network is given in the left dashed box of Figure 3, where the two last hidden vector $\\overrightarrow{hr_{N}}$ and $\\overleftarrow{hr_{1}}$ can be concatenated as the table representation, which is the initial input for the decoder.",
"Meanwhile, a bidirectional-LSTM model is used to encode the reference text $ {w_1, ..., w_K}$ into a set of hidden states $W = [{w.h_1, ..., w.h_K}]$, where $K$ is the length of the reference text and each $w.h_i$ is a $2d$-dimensional vector."
],
[
"We present an interactive attention model that attends to the structured records and reference text simultaneously, and finally fuses both attention context representations. Our work is partially inspired by the successful application of co-attention methods in Reading Comprehension BIBREF14, BIBREF15, BIBREF16 and Natural Language Inference BIBREF17, BIBREF18.",
"As shown in the middle-right dashed box of Figure 3, we first construct the Record Bank as $R= [rc_1,...,rc_o,..., rc_{L_x},] \\in \\mathbb {R}^{2d \\times L_x}$, where $L_x = M \\times N$ is the number of records in Table $x$ and each $rc_o$ is the final representation of record $r_{ij}$, $r_{ij} = [\\overrightarrow{hc_{ij}}, \\overleftarrow{hc_{ij}}]$, as well as the Reference Bank $W$, which is $W = [{w.h_1, ..., w.h_K}] $. Then, we calculate the affinity matrix, which contains affinity scores corresponding to all pairs of structured records and reference words: $L = R^TW \\in \\mathbb {R}^{ L_x \\times K} $. The affinity matrix is normalized row-wise to produce the attention weights $A^W$ across the structured table for each word in the reference text, and column-wise to produce the attention weights $A^R$ across the reference for each record in the Table:",
"Next, we compute the suitable records of the table in light of each word of the reference.",
"We similarly compute the summaries $WA^R$ of the reference in light of each record of the table. Similar to BIBREF14, we also place reference-level attention over the record-level attention by compute the record summaries $C^WA^R$ of the previous attention weights in light of each record of the table. These two operations can be done in parallel, as is shown in Eq. 6.",
"We define $C^R$ as a fusion feature bank, which is an interactive representation of the reference and structured records.",
"In the last, a bidirectional LSTM is used for fusing the relatedness to the interactive features. The output $F = [f_1,..., f_{L_X}] \\in \\mathbb {R}^{ 2d \\times L_x} $, which provides a foundation for selecting which record may be the best suitable content, as fusion feature bank."
],
[
"An illustration of our decoder is shown in the top-right dashed box of Figure 3. We adopt a joint attention model BIBREF19 and a copy mechanism BIBREF20 in our decoding phrase. In particular, our joint attention covers the fusion feature bank, which represents an interactive representation of the input records and reference text. And we refuse the coverage mechanism, which does not satisfy the original intention of content selection in our setting.",
"In detail, we present a flexible copying mechanism which is able to copy contents from table records. The basic idea of the copying mechanism is to copy a word from the table contents as a trade-off of generating a word from target vocabulary via softmax operation. On one hand, we define the probability of copying a word $\\tilde{z}$ from table records at time step $t$ as $g_t(\\tilde{z}) \\odot \\alpha _{(t, id(\\tilde{z}))}$, where $g_t(\\tilde{z})$ is the probability of copying a record from the table, $id(\\tilde{z})$ indicates the record number of $\\tilde{z}$, and $\\alpha _{(t, id(\\tilde{z}))}$ is the attention probability on the $id(\\tilde{z})$-th record. On the other hand, we use $(1 - g_t(\\tilde{z}) ) \\odot \\beta _{(\\tilde{z})}$ as the probability of generating a word $\\tilde{z}$ from the target vocabulary, where $\\beta _{(\\tilde{z})}$ is from the distribution over the target vocabulary via softmax operation. We obtain the final probability of generating a word $\\tilde{z}$ as follows",
"The above model, copies contents only from table records, but not reference words."
],
[
"In order to train our system with a true text manipulation setting, we adapt the back-translation BIBREF9 to our scenario. After we generate text $z$ based on $(x, y^{\\prime })$, we regard $z$ as a new reference text and paired with $x^{\\prime }$ to generate a new text $z^{\\prime }$. Naturally, the golden text of $z^{\\prime }$ is $y^{\\prime }$, which can provide an additional training objective in the training process. Figure 2 provides an illustration of the back-translation, which reconstructs $y^{\\prime }$ given ($x^{\\prime }$, $z$):",
"We call it the back-translation objective. Therefore, our final objective consists of content fidelity objective, style preservation objective and back-translation objective.",
"where $\\lambda _1 $ and $\\lambda _2$ are hyperparameters."
],
[
"In this section, we describe experiment settings and report the experiment results and analysis. We apply our neural models for text manipulation on both document-level and sentence-level datasets, which are detailed in Table 1."
],
[
"We use two-layers LSTMs in all encoders and decoders, and employ attention mechanism BIBREF19. Trainable model parameters are randomly initialized under a Gaussian distribution. We set the hyperparameters empirically based on multiple tries with different settings. We find the following setting to be the best. The dimension of word/feature embedding, encoder hidden state, and decoder hidden state are all set to be 600. We apply dropout at a rate of 0.3. Our training process consists of three parts. In the first, we set $\\lambda _1=0$ and $\\lambda _2=1$ in Eq. 7 and pre-train the model to convergence. We then set $\\lambda _1=0.5$ and $\\lambda _2=0.5$ for the next stage training. Finally, we set $\\lambda _1=0.4$ and $\\lambda _2=0.5$ for full training. Adam is used for parameter optimization with an initial learning rate of 0.001 and decaying rate of 0.97. During testing, we use beam search with beam size of 5. The minimum decoding length is set to be 150 and maximum decoding length is set to be 850.",
"We use the same evaluation metrics employed in BIBREF1. Content Fidelity (CF) is an information extraction (IE) approach used in BIBREF10 to measure model's ability to generate text containing factual records. That is, precision and recall (or number) of unique records extracted from the generated text $z$ via an IE model also appear in source recordset $x$. Style Preservation is used to measure how many stylistic properties of the reference are retained in the generated text. In this paper, we calculate BLEU score between the generated text and the reference to reflect model's ability on style preservation. Furthermore, in order to measure model's ability on content selection, we adopt another IE-based evaluation metric, named Content selection, (CS), which is used for data-to-text generation BIBREF10. It is measured in terms of precision and recall by comparing records in generated text $z$ with records in the auxiliary reference $y_{aux}$.",
"We compare with the following baseline methods on the document-level text manipulation.",
"(1) Rule-based Slot Filling Method (Rule-SF) is a straightforward way for text manipulation. Firstly, It masks the record information $x^{\\prime }$ in the $y^{\\prime }$ and build a mapping between $x$ and $x^{\\prime }$ through their data types. Afterwards, select the suitable records from $x$ to fill in the reference y with masked slots. The method is also used in sentence-level task BIBREF1.",
"(2) Copy-based Slot Filling Method (Copy-SF) is a data-driven slot filling method. It is derived from BIBREF21, which first generates a template text with data slots to be filled and then leverages a delayed copy mechanism to fill in the slots with proper data records.",
"(3) Conditional Copy based Data-To-Text (CCDT) is a classical neural model for data-to-text generation BIBREF10. (4) Hierarchical Encoder for Data-To-Text (HEDT) is also a data-to-text method, which adopts the same hierarchical encoder in our model.",
"(5) Text Manipulation with Table Encoder (TMTE) extends sentence-level text editing method BIBREF1 by equipping a more powerful hierarchical table encoder.",
"(6) Co-attention-based Method (Coatt): a variation of our model by replacing interactive attention with another co-attention model BIBREF22.",
"(7) Ours w/o Interactive Attention (-InterAtt) is our model without interactive attention.",
"(8) Ours w/o Back-translation (-BackT) is also a variation of our model by omitting back-translation loss.",
"In addition, for sentence-level task, we adopt the same baseline methods as the paper BIBREF1, including an attention-based Seq2Seq method with copy mechanism BIBREF23, a rule-based method, two style transfer methods, MAST BIBREF24 and AdvST BIBREF25, as well as their state-of-the-art method, abbreviate as S-SOTA."
],
[
"Document-level text manipulation experimental results are given in Table 2. The first block shows two slot filling methods, which can reach the maximum BLEU (100) after masking out record tokens. It is because that both methods only replace records without modifying other parts of the reference text. Moreover, Copy-SF achieves reasonably good performance on multiple metrics, setting a strong baseline for content fidelity and content selection. For two data-to-text generation methods CCDT and HEDT, the latter one is consistently better than the former, which verifies the proposed hierarchical record encoder is more powerful. However, their Style BLEU scores are particularly low, which demonstrates that direct supervised learning is incapable of controlling the text expression. In comparison, our proposed models achieve better Style BLEU and Content Selection F%. The superior performance of our full model compared to the variant ours-w/o-InterAtt, TMTE and Coatt demonstrates the usefulness of the interactive attention mechanism."
],
[
"In this section, we hired three graduates who passed intermediate English test (College English Test Band 6) and were familiar with NBA games to perform human evaluation. Following BIBREF1, BIBREF26, we presented to annotators five generated summaries, one from our model and four others from comparison methods, such as Rule-SF, Copy-SF, HEDT, TMTE. These students were asked to rank the five summaries by considering “Content Fidelity”, “Style Preservation” and “Fluency” separately. The rank of each aspect ranged from 1 to 5 with the higher score the better and the ranking scores are averaged as the final score. For each study, we evaluated on 50 test instances. From Table 3, we can see that the Content Fidelity and Style Preservation results are highly consistent with the results of the objective evaluation. An exception is that the Fluency of our model is much higher than other methods. One possible reason is that the reference-based generation method is more flexible than template-based methods, and more stable than pure language models on document-level long text generation tasks."
],
[
"To demonstrate the effectiveness of our models on sentence-level text manipulation, we show the results in Table 4. We can see that our full model can still get consistent improvements on sentence-level task over previous state-of-the-art method. Specifically, we observe that interactive attention and back-translation cannot bring a significant gain. This is partially because the input reference and records are relatively simple, which means that they do not require overly complex models for representation learning."
],
[
"Figure 4 shows the generated examples by different models given content records $x$ and reference summary $y^{\\prime }$. We can see that our full model can manipulate the reference style words more accurately to express the new records. Whereas four generations seem to be fluent, the summary of Rule-SF includes logical erroneous sentences colored in orange. It shows a common sense error that Davis was injured again when he had left the stadium with an injury. This is because although the rule-based method has the most style words, they cannot be modified, which makes these style expressions illogical. An important discovery is that the sentence-level text content manipulation model TMTE fails to generate the style words similar to the reference summary. The reason is that TMTE has no interactive attention module unlike our model, which models the semantic relationship between records and reference words and therefore accurately select the suitable information from bi-aspect inputs. However, when expressions such as parallel structures are used, our model generates erroneous expressions as illustrated by the description about Anthony Davis's records “20 points, 12 rebounds, one steals and two blocks in 42 minutes”."
],
[
"Recently, text style transfer and controlled text generation have been widely studied BIBREF27, BIBREF26, BIBREF25, BIBREF28. They mainly focus on generating realistic sentences, whose attributes can be controlled by learning disentangled latent representations. Our work differs from those in that: (1) we present a document-level text manipulation task rather than sentence-level. (2) The style attributes in our task is the textual expression of a given reference document. (3) Besides text representation learning, we also need to model structured records in our task and do content selection. Particularly, our task can be regard as an extension of sentence-level text content manipulation BIBREF1, which assumes an existing sentence to provide the source of style and structured records as another input. It takes into account the semantic relationship between records and reference words and experiment results verify the effectiveness of this improvement on both document- and sentence-level datasets.",
"Furthermore, our work is similar but different from data-to-text generation studies BIBREF7, BIBREF29, BIBREF30, BIBREF31, BIBREF8, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36. This series of work focuses on generating more accurate descriptions of given data, rather than studying the writing content of control output. Our task takes a step forward to simultaneously selecting desired content and depending on specific reference text style. Moreover, our task is more challenging due to its unsupervised setting. Nevertheless, their structured table modeling methods and data selection mechanisms can be used in our task. For example, BIBREF10 develops a MLP-based table encoder. BIBREF21 presents a two-stage approach with a delayed copy mechanism, which is also used as a part of our automatic slot filling baseline model."
],
[
"In this paper, we first introduce a new yet practical problem, named document-level text content manipulation, which aims to express given structured recordset with a paragraph text and mimic the writing style of a reference text. Afterwards, we construct a corresponding dataset and develop a neural model for this task with hierarchical record encoder and interactive attention mechanism. In addition, we optimize the previous training strategy with back-translation. Finally, empirical results verify that the presented approaches perform substantively better than several popular data-to-text generation and style transfer methods on both constructed document-level dataset and a sentence-level dataset. In the future, we plan to integrate neural-based retrieval methods into our model for further improving results."
],
[
"Bing Qin is the corresponding author of this work. This work was supported by the National Key R&D Program of China (No. 2018YFB1005103), National Natural Science Foundation of China (No. 61906053) and Natural Science Foundation of Heilongjiang Province of China (No. YQ2019F008)."
]
],
"section_name": [
"Introduction",
"Preliminaries ::: Problem Statement",
"Preliminaries ::: Document-scale Data Collection",
"The Approach",
"The Approach ::: An Overview",
"The Approach ::: Hierarchical Record Encoder",
"The Approach ::: Interactive Attention",
"The Approach ::: Decoder",
"The Approach ::: Back-translation Objective",
"Experiments",
"Experiments ::: Implementation Details and Evaluation Metrics",
"Experiments ::: Comparison on Document-level Text Manipulation",
"Experiments ::: Human Evaluation",
"Experiments ::: Comparison on Sentence-level Text Manipulation",
"Experiments ::: Qualitative Example",
"Related Work",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"2f7b408b4ff5891eca689de3a7e2a7df6c378a82",
"e7cfdc7ee52b7fe5c5501f2f45bd11541cda9aa0"
],
"answer": [
{
"evidence": [
"Document-level text manipulation experimental results are given in Table 2. The first block shows two slot filling methods, which can reach the maximum BLEU (100) after masking out record tokens. It is because that both methods only replace records without modifying other parts of the reference text. Moreover, Copy-SF achieves reasonably good performance on multiple metrics, setting a strong baseline for content fidelity and content selection. For two data-to-text generation methods CCDT and HEDT, the latter one is consistently better than the former, which verifies the proposed hierarchical record encoder is more powerful. However, their Style BLEU scores are particularly low, which demonstrates that direct supervised learning is incapable of controlling the text expression. In comparison, our proposed models achieve better Style BLEU and Content Selection F%. The superior performance of our full model compared to the variant ours-w/o-InterAtt, TMTE and Coatt demonstrates the usefulness of the interactive attention mechanism.",
"FLOAT SELECTED: Table 2: Document-level comparison results.",
"In this section, we hired three graduates who passed intermediate English test (College English Test Band 6) and were familiar with NBA games to perform human evaluation. Following BIBREF1, BIBREF26, we presented to annotators five generated summaries, one from our model and four others from comparison methods, such as Rule-SF, Copy-SF, HEDT, TMTE. These students were asked to rank the five summaries by considering “Content Fidelity”, “Style Preservation” and “Fluency” separately. The rank of each aspect ranged from 1 to 5 with the higher score the better and the ranking scores are averaged as the final score. For each study, we evaluated on 50 test instances. From Table 3, we can see that the Content Fidelity and Style Preservation results are highly consistent with the results of the objective evaluation. An exception is that the Fluency of our model is much higher than other methods. One possible reason is that the reference-based generation method is more flexible than template-based methods, and more stable than pure language models on document-level long text generation tasks.",
"FLOAT SELECTED: Table 3: Human Evaluation Results.",
"To demonstrate the effectiveness of our models on sentence-level text manipulation, we show the results in Table 4. We can see that our full model can still get consistent improvements on sentence-level task over previous state-of-the-art method. Specifically, we observe that interactive attention and back-translation cannot bring a significant gain. This is partially because the input reference and records are relatively simple, which means that they do not require overly complex models for representation learning.",
"FLOAT SELECTED: Table 4: Sentence-level comparison results."
],
"extractive_spans": [],
"free_form_answer": "For Document- level comparison, the model achieves highest CS precision and F1 score and it achieves higher BLEU score that TMTE, Coatt, CCDT, and HEDT. \nIn terms of Human Evaluation, the model had the highest average score, the highest Fluency score, and the second highest Content Fidelity. \nIn terms of Sentence-level comparison the model had the highest Recall and F1 scores for Content Fidelity.",
"highlighted_evidence": [
"Document-level text manipulation experimental results are given in Table 2. The first block shows two slot filling methods, which can reach the maximum BLEU (100) after masking out record tokens. It is because that both methods only replace records without modifying other parts of the reference text. Moreover, Copy-SF achieves reasonably good performance on multiple metrics, setting a strong baseline for content fidelity and content selection. For two data-to-text generation methods CCDT and HEDT, the latter one is consistently better than the former, which verifies the proposed hierarchical record encoder is more powerful. However, their Style BLEU scores are particularly low, which demonstrates that direct supervised learning is incapable of controlling the text expression. In comparison, our proposed models achieve better Style BLEU and Content Selection F%. The superior performance of our full model compared to the variant ours-w/o-InterAtt, TMTE and Coatt demonstrates the usefulness of the interactive attention mechanism.",
"FLOAT SELECTED: Table 2: Document-level comparison results.",
"In this section, we hired three graduates who passed intermediate English test (College English Test Band 6) and were familiar with NBA games to perform human evaluation.",
"From Table 3, we can see that the Content Fidelity and Style Preservation results are highly consistent with the results of the objective evaluation. An exception is that the Fluency of our model is much higher than other methods. One possible reason is that the reference-based generation method is more flexible than template-based methods, and more stable than pure language models on document-level long text generation tasks.",
"FLOAT SELECTED: Table 3: Human Evaluation Results.",
"To demonstrate the effectiveness of our models on sentence-level text manipulation, we show the results in Table 4. We can see that our full model can still get consistent improvements on sentence-level task over previous state-of-the-art method. ",
"FLOAT SELECTED: Table 4: Sentence-level comparison results."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"148976f4c74f0e1da426123b3964a44998f83659",
"2e2769dcc2e1ad02fbb84ecf0452bf70cb03cfcb"
],
"answer": [
{
"evidence": [
"We use the same evaluation metrics employed in BIBREF1. Content Fidelity (CF) is an information extraction (IE) approach used in BIBREF10 to measure model's ability to generate text containing factual records. That is, precision and recall (or number) of unique records extracted from the generated text $z$ via an IE model also appear in source recordset $x$. Style Preservation is used to measure how many stylistic properties of the reference are retained in the generated text. In this paper, we calculate BLEU score between the generated text and the reference to reflect model's ability on style preservation. Furthermore, in order to measure model's ability on content selection, we adopt another IE-based evaluation metric, named Content selection, (CS), which is used for data-to-text generation BIBREF10. It is measured in terms of precision and recall by comparing records in generated text $z$ with records in the auxiliary reference $y_{aux}$."
],
"extractive_spans": [
"Content Fidelity (CF) ",
"Content selection, (CS)",
"BLEU "
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the same evaluation metrics employed in BIBREF1. Content Fidelity (CF) is an information extraction (IE) approach used in BIBREF10 to measure model's ability to generate text containing factual records. That is, precision and recall (or number) of unique records extracted from the generated text $z$ via an IE model also appear in source recordset $x$. Style Preservation is used to measure how many stylistic properties of the reference are retained in the generated text. In this paper, we calculate BLEU score between the generated text and the reference to reflect model's ability on style preservation. Furthermore, in order to measure model's ability on content selection, we adopt another IE-based evaluation metric, named Content selection, (CS), which is used for data-to-text generation BIBREF10. It is measured in terms of precision and recall by comparing records in generated text $z$ with records in the auxiliary reference $y_{aux}$."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the same evaluation metrics employed in BIBREF1. Content Fidelity (CF) is an information extraction (IE) approach used in BIBREF10 to measure model's ability to generate text containing factual records. That is, precision and recall (or number) of unique records extracted from the generated text $z$ via an IE model also appear in source recordset $x$. Style Preservation is used to measure how many stylistic properties of the reference are retained in the generated text. In this paper, we calculate BLEU score between the generated text and the reference to reflect model's ability on style preservation. Furthermore, in order to measure model's ability on content selection, we adopt another IE-based evaluation metric, named Content selection, (CS), which is used for data-to-text generation BIBREF10. It is measured in terms of precision and recall by comparing records in generated text $z$ with records in the auxiliary reference $y_{aux}$."
],
"extractive_spans": [
"Content Fidelity (CF)",
"Style Preservation",
"BLEU score",
"Content selection"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the same evaluation metrics employed in BIBREF1. Content Fidelity (CF) is an information extraction (IE) approach used in BIBREF10 to measure model's ability to generate text containing factual records. That is, precision and recall (or number) of unique records extracted from the generated text $z$ via an IE model also appear in source recordset $x$. Style Preservation is used to measure how many stylistic properties of the reference are retained in the generated text. In this paper, we calculate BLEU score between the generated text and the reference to reflect model's ability on style preservation. Furthermore, in order to measure model's ability on content selection, we adopt another IE-based evaluation metric, named Content selection, (CS), which is used for data-to-text generation BIBREF10."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1cfdf3a5f26d07a65683d3943fd82e6fe9043ec5",
"576a82710aa5d40d2f041e0e9db4ccb8ad656c78"
],
"answer": [
{
"evidence": [
"We compare with the following baseline methods on the document-level text manipulation.",
"(1) Rule-based Slot Filling Method (Rule-SF) is a straightforward way for text manipulation. Firstly, It masks the record information $x^{\\prime }$ in the $y^{\\prime }$ and build a mapping between $x$ and $x^{\\prime }$ through their data types. Afterwards, select the suitable records from $x$ to fill in the reference y with masked slots. The method is also used in sentence-level task BIBREF1.",
"(2) Copy-based Slot Filling Method (Copy-SF) is a data-driven slot filling method. It is derived from BIBREF21, which first generates a template text with data slots to be filled and then leverages a delayed copy mechanism to fill in the slots with proper data records.",
"(3) Conditional Copy based Data-To-Text (CCDT) is a classical neural model for data-to-text generation BIBREF10. (4) Hierarchical Encoder for Data-To-Text (HEDT) is also a data-to-text method, which adopts the same hierarchical encoder in our model.",
"(5) Text Manipulation with Table Encoder (TMTE) extends sentence-level text editing method BIBREF1 by equipping a more powerful hierarchical table encoder.",
"(6) Co-attention-based Method (Coatt): a variation of our model by replacing interactive attention with another co-attention model BIBREF22.",
"(7) Ours w/o Interactive Attention (-InterAtt) is our model without interactive attention.",
"(8) Ours w/o Back-translation (-BackT) is also a variation of our model by omitting back-translation loss.",
"In addition, for sentence-level task, we adopt the same baseline methods as the paper BIBREF1, including an attention-based Seq2Seq method with copy mechanism BIBREF23, a rule-based method, two style transfer methods, MAST BIBREF24 and AdvST BIBREF25, as well as their state-of-the-art method, abbreviate as S-SOTA."
],
"extractive_spans": [
"Rule-based Slot Filling Method (Rule-SF)",
"Copy-based Slot Filling Method (Copy-SF)",
"Conditional Copy based Data-To-Text (CCDT)",
"Hierarchical Encoder for Data-To-Text (HEDT)",
"Text Manipulation with Table Encoder (TMTE)",
"Co-attention-based Method (Coatt)",
"attention-based Seq2Seq method with copy mechanism",
"rule-based method",
"MAST",
"AdvST",
"S-SOTA"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare with the following baseline methods on the document-level text manipulation.\n\n(1) Rule-based Slot Filling Method (Rule-SF) is a straightforward way for text manipulation. Firstly, It masks the record information $x^{\\prime }$ in the $y^{\\prime }$ and build a mapping between $x$ and $x^{\\prime }$ through their data types. Afterwards, select the suitable records from $x$ to fill in the reference y with masked slots. The method is also used in sentence-level task BIBREF1.\n\n(2) Copy-based Slot Filling Method (Copy-SF) is a data-driven slot filling method. It is derived from BIBREF21, which first generates a template text with data slots to be filled and then leverages a delayed copy mechanism to fill in the slots with proper data records.\n\n(3) Conditional Copy based Data-To-Text (CCDT) is a classical neural model for data-to-text generation BIBREF10. (4) Hierarchical Encoder for Data-To-Text (HEDT) is also a data-to-text method, which adopts the same hierarchical encoder in our model.\n\n(5) Text Manipulation with Table Encoder (TMTE) extends sentence-level text editing method BIBREF1 by equipping a more powerful hierarchical table encoder.\n\n(6) Co-attention-based Method (Coatt): a variation of our model by replacing interactive attention with another co-attention model BIBREF22.\n\n(7) Ours w/o Interactive Attention (-InterAtt) is our model without interactive attention.\n\n(8) Ours w/o Back-translation (-BackT) is also a variation of our model by omitting back-translation loss.\n\nIn addition, for sentence-level task, we adopt the same baseline methods as the paper BIBREF1, including an attention-based Seq2Seq method with copy mechanism BIBREF23, a rule-based method, two style transfer methods, MAST BIBREF24 and AdvST BIBREF25, as well as their state-of-the-art method, abbreviate as S-SOTA."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We compare with the following baseline methods on the document-level text manipulation.",
"(1) Rule-based Slot Filling Method (Rule-SF) is a straightforward way for text manipulation. Firstly, It masks the record information $x^{\\prime }$ in the $y^{\\prime }$ and build a mapping between $x$ and $x^{\\prime }$ through their data types. Afterwards, select the suitable records from $x$ to fill in the reference y with masked slots. The method is also used in sentence-level task BIBREF1.",
"(2) Copy-based Slot Filling Method (Copy-SF) is a data-driven slot filling method. It is derived from BIBREF21, which first generates a template text with data slots to be filled and then leverages a delayed copy mechanism to fill in the slots with proper data records.",
"(3) Conditional Copy based Data-To-Text (CCDT) is a classical neural model for data-to-text generation BIBREF10. (4) Hierarchical Encoder for Data-To-Text (HEDT) is also a data-to-text method, which adopts the same hierarchical encoder in our model.",
"(5) Text Manipulation with Table Encoder (TMTE) extends sentence-level text editing method BIBREF1 by equipping a more powerful hierarchical table encoder.",
"(6) Co-attention-based Method (Coatt): a variation of our model by replacing interactive attention with another co-attention model BIBREF22.",
"(7) Ours w/o Interactive Attention (-InterAtt) is our model without interactive attention.",
"(8) Ours w/o Back-translation (-BackT) is also a variation of our model by omitting back-translation loss.",
"In addition, for sentence-level task, we adopt the same baseline methods as the paper BIBREF1, including an attention-based Seq2Seq method with copy mechanism BIBREF23, a rule-based method, two style transfer methods, MAST BIBREF24 and AdvST BIBREF25, as well as their state-of-the-art method, abbreviate as S-SOTA."
],
"extractive_spans": [
" Rule-based Slot Filling Method (Rule-SF)",
"Copy-based Slot Filling Method (Copy-SF) ",
"Conditional Copy based Data-To-Text (CCDT)",
"Data-To-Text (HEDT) ",
"Table Encoder (TMTE)",
" Co-attention-based Method (Coatt)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare with the following baseline methods on the document-level text manipulation.\n\n(1) Rule-based Slot Filling Method (Rule-SF) is a straightforward way for text manipulation. Firstly, It masks the record information $x^{\\prime }$ in the $y^{\\prime }$ and build a mapping between $x$ and $x^{\\prime }$ through their data types. Afterwards, select the suitable records from $x$ to fill in the reference y with masked slots. The method is also used in sentence-level task BIBREF1.\n\n(2) Copy-based Slot Filling Method (Copy-SF) is a data-driven slot filling method. It is derived from BIBREF21, which first generates a template text with data slots to be filled and then leverages a delayed copy mechanism to fill in the slots with proper data records.\n\n(3) Conditional Copy based Data-To-Text (CCDT) is a classical neural model for data-to-text generation BIBREF10. (4) Hierarchical Encoder for Data-To-Text (HEDT) is also a data-to-text method, which adopts the same hierarchical encoder in our model.\n\n(5) Text Manipulation with Table Encoder (TMTE) extends sentence-level text editing method BIBREF1 by equipping a more powerful hierarchical table encoder.\n\n(6) Co-attention-based Method (Coatt): a variation of our model by replacing interactive attention with another co-attention model BIBREF22.\n\n(7) Ours w/o Interactive Attention (-InterAtt) is our model without interactive attention.\n\n(8) Ours w/o Back-translation (-BackT) is also a variation of our model by omitting back-translation loss.\n\nIn addition, for sentence-level task, we adopt the same baseline methods as the paper BIBREF1, including an attention-based Seq2Seq method with copy mechanism BIBREF23, a rule-based method, two style transfer methods, MAST BIBREF24 and AdvST BIBREF25, as well as their state-of-the-art method, abbreviate as S-SOTA."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"6070e8d77f24e06b8d4a1a062fbe58e17953d813",
"ddc6792407f620ff168337629cc85ab3ad56b94b"
],
"answer": [
{
"evidence": [
"In this subsection, we construct a large document-scale text content manipulation dataset as a testbed of our task. The dataset is derived from an NBA game report corpus ROTOWIRE BIBREF10, which consists of 4,821 human written NBA basketball game summaries aligned with their corresponding game tables. In our work, each of the original table-summary pair is treated as a pair of $(x, y_{aux})$, as described in previous subsection. To this end, we design a type-based method for obtaining a suitable reference summary $y^{\\prime }$ via retrieving another table-summary from the training data using $x$ and $y_{aux}$. The retrieved $y^{\\prime }$ contains record types as same as possible with record types contained in $y$. We use an existing information extraction tool BIBREF10 to extract record types from the reference text. Table TABREF3 shows the statistics of constructed document-level dataset and a sentence-level benchmark dataset BIBREF1. We can see that the proposed document-level text manipulation problem is more difficult than sentence-level, both in terms of the complexity of input records and the length of generated text.",
"In this section, we describe experiment settings and report the experiment results and analysis. We apply our neural models for text manipulation on both document-level and sentence-level datasets, which are detailed in Table 1.",
"FLOAT SELECTED: Table 1: Document-level/Sentence-level Data Statistics."
],
"extractive_spans": [],
"free_form_answer": "Document-level dataset has total of 4821 instances. \nSentence-level dataset has total of 45583 instances. ",
"highlighted_evidence": [
"We use an existing information extraction tool BIBREF10 to extract record types from the reference text. Table TABREF3 shows the statistics of constructed document-level dataset and a sentence-level benchmark dataset BIBREF1. We can see that the proposed document-level text manipulation problem is more difficult than sentence-level, both in terms of the complexity of input records and the length of generated text.",
" We apply our neural models for text manipulation on both document-level and sentence-level datasets, which are detailed in Table 1.",
"FLOAT SELECTED: Table 1: Document-level/Sentence-level Data Statistics."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this subsection, we construct a large document-scale text content manipulation dataset as a testbed of our task. The dataset is derived from an NBA game report corpus ROTOWIRE BIBREF10, which consists of 4,821 human written NBA basketball game summaries aligned with their corresponding game tables. In our work, each of the original table-summary pair is treated as a pair of $(x, y_{aux})$, as described in previous subsection. To this end, we design a type-based method for obtaining a suitable reference summary $y^{\\prime }$ via retrieving another table-summary from the training data using $x$ and $y_{aux}$. The retrieved $y^{\\prime }$ contains record types as same as possible with record types contained in $y$. We use an existing information extraction tool BIBREF10 to extract record types from the reference text. Table TABREF3 shows the statistics of constructed document-level dataset and a sentence-level benchmark dataset BIBREF1. We can see that the proposed document-level text manipulation problem is more difficult than sentence-level, both in terms of the complexity of input records and the length of generated text.",
"FLOAT SELECTED: Table 1: Document-level/Sentence-level Data Statistics."
],
"extractive_spans": [],
"free_form_answer": "Total number of documents is 4821. Total number of sentences is 47583.",
"highlighted_evidence": [
"TABREF3 shows the statistics of constructed document-level dataset and a sentence-level benchmark dataset BIBREF1.",
"FLOAT SELECTED: Table 1: Document-level/Sentence-level Data Statistics."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How better are results of new model compared to competitive methods?",
"What is the metrics used for benchmarking methods?",
"What are other competitive methods?",
"What is the size of built dataset?"
],
"question_id": [
"d576af4321fe71ced9e521df1f3fe1eb90d2df2d",
"fd651d19046966ca65d4bcf6f6ae9c66cdf13777",
"08b77c52676167af72581079adf1ca2b994ce251",
"89fa14a04008c93907fa13375f9e70b655d96209"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: An example input (Table and Reference Summary) of document-level text content manipulation and its desired output. Text portions that fulfill the writing style are highlight in orange.",
"Table 1: Document-level/Sentence-level Data Statistics.",
"Figure 2: An overview of the document-level approach.",
"Figure 3: The architecture of our proposed model.",
"Table 2: Document-level comparison results.",
"Table 4: Sentence-level comparison results.",
"Table 3: Human Evaluation Results.",
"Figure 4: Examples of model output for HEDT, Rule-SF and Our full model on document-scale dataset. Red words or numbers are fidelity errors. Orange words are logical errors. Text portions in the reference summary and the document-scale outputs of different generation model that fulfill the stylistic characteristics are highlighted in blue."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"6-Table2-1.png",
"6-Table4-1.png",
"6-Table3-1.png",
"7-Figure4-1.png"
]
} | [
"How better are results of new model compared to competitive methods?",
"What is the size of built dataset?"
] | [
[
"2002.10210-Experiments ::: Human Evaluation-0",
"2002.10210-6-Table4-1.png",
"2002.10210-Experiments ::: Comparison on Document-level Text Manipulation-0",
"2002.10210-6-Table3-1.png",
"2002.10210-6-Table2-1.png",
"2002.10210-Experiments ::: Comparison on Sentence-level Text Manipulation-0"
],
[
"2002.10210-Experiments-0",
"2002.10210-Preliminaries ::: Document-scale Data Collection-0",
"2002.10210-2-Table1-1.png"
]
] | [
"For Document- level comparison, the model achieves highest CS precision and F1 score and it achieves higher BLEU score that TMTE, Coatt, CCDT, and HEDT. \nIn terms of Human Evaluation, the model had the highest average score, the highest Fluency score, and the second highest Content Fidelity. \nIn terms of Sentence-level comparison the model had the highest Recall and F1 scores for Content Fidelity.",
"Total number of documents is 4821. Total number of sentences is 47583."
] | 332 |
1909.11706 | The Power of Communities: A Text Classification Model with Automated Labeling Process Using Network Community Detection | Text classification is one of the most critical areas in machine learning and artificial intelligence research. It has been actively adopted in many business applications such as conversational intelligence systems, news article categorization, sentiment analysis, emotion detection systems, and many other recommendation systems in our daily life. One of the problems in supervised text classification models is that the models' performance depends heavily on the quality of data labeling, which is typically done by humans. In this study, we propose a new network community detection-based approach to automatically label and classify text data into multiclass value spaces. Specifically, we build a network with sentences as the network nodes and pairwise cosine similarities between TFIDF vector representations of the sentences as the network link weights. We use the Louvain method to detect the communities in the sentence network. We train and test support vector machine and random forest models on both the human-labeled data and the data labeled by network community detection. Results showed that models trained on the data labeled by network community detection outperformed the models trained on the human-labeled data by 2.68-3.75% in classification accuracy. Our method may help the development of more accurate conversational intelligence systems and other text classification systems. | {
"paragraphs": [
[
"Text data is a great source of knowledge for building many useful recommendation systems, search engines as well as conversational intelligence systems. However, it is often found to be a difficult and time consuming task to structure the unstructured text data especially when it comes to labeling the text data for training text classification models. Data labeling, typically done by humans, is prone to make misslabeled data entries, and hard to track whether the data is correctly labeled or not. This human labeling practice indeed impacts on the quality of the trained models in solving classificastion problems.",
"Some previous studies attempted to solve this problem by utilizing unsupervised BIBREF3, BIBREF4 and semisupervised BIBREF5 machine learning models. However, those studies used pre-defined keywords list for each category in the document, which provides the models with extra referencial materials to look at when making the classification predictions, or included already labeled data as a part of the entire data set from which the models learn. In case of using clustering algorithms such as K$-$means BIBREF4, since the features selected for each class depend on the frequency of the particular words in the sentences, when there are words that appear in multiple sentences frequently, it is very much possible that those words can be used as features for multiple classes leading the model to render more ambiguity, and to result a poor performance in classifying documents.",
"Although there are many studies in text classification problems using machine learning techniques, there has been a limited number of studies conducted in text classifications utilizing networks science. Network science is actively being adopted in studying biological networks, social networks, financial market prediction BIBREF6 and more in many fields of study to mine insights from the collectively inter-connected components by analysing their relationships and structural characteristics. Only a few studies adopted network science theories to study text classifications, and showed preliminary results of the text clustering performed by network analysis specially with network community detection algorithms BIBREF7, BIBREF8. However, those studies did not clearly show the quality of community detection algorithms and other possible usefull features. Network community detection BIBREF9 is graph clustering methods actively used in complex networks analysis From large social networks analysis BIBREF10 to RNA-sequencing analysis BIBREF11 as a tool to partition a graph data into multiple parts based on the network's structural properties such as betweeness, modularity, etc.",
"In this paper, we study further to show the usefulness of the network community detection on labeling unlabeled text data that will automate and improve human labeling tasks, and on training machine learning classification models for a particular text classification problem. We finally show that the machine learning models trained on the data labeled by the network community detection model outperform the models trained on the human labeled data."
],
[
"We propose a new approach of building text classification models using a network community detection algorithm with unlabeled text data, and show that the network community detection is indeed useful in labeling text data by clustering the text data into multiple distinctive groups, and also in improving the classification accuracy. This study follows below steps (see Figure.FIGREF7), and uses Python packages such as NLTK, NetworkX and SKlearn.",
"Gathered a set of text data that was used to develop a particular conversational intelligence(chatbot) system from an artificial intelligence company, Pypestream. The data contains over 2,000 sentences of user expressions on that particular chatbot service such as [\"is there any parking space?\", \"what movies are playing?\", \"how can I get there if I'm taking a subway?\"]",
"Tokenizing and cleaning the sentences by removing punctuations, special characters and English stopwords that appear frequently without holding much important meaning. For example, [\"how can I get there if I'm taking a subway?\"] becomes ['get', 'taking', 'subway']",
"Stemmizing the words, and adding synonyms and bigrams of the sequence of the words left in each sentence to enable the model to learn more kinds of similar expressions and the sequences of the words. For example, ['get', 'taking', 'subway'] becomes ['get', 'take', 'subway', 'tube', 'underground', 'metro', 'take metro', 'get take', 'take subway', 'take underground', ...]",
"Transforming the preprocessed text data into a vector form by computing TFIDF of each preprocessed sentence with regard to the entire data set, and computing pair-wise cosine similiarity of the TFIDF vectors to form the adjacency matrix of the sentence network",
"Constructing the sentence network using the adjacency matrix with each preprocessed sentence as a network node and the cosine similarity of TFIDF representations between every node pair as the link weight.",
"Applying a network community detection algorithm on the sentence network to detect the communities where each preprocessed sentence belong, and build a labeled data set with detected communities for training and testing machine learning classification models."
],
[
"The data set obtained from Pypestream is permitted to be used for the research purpose only, and for a security reason, we are not allowed to share the data set. It was once originaly used for creating a conversational intelligence system(chatbot) to support customer inqueries about a particular service. The data set is a two-column comma separated value format data with one column of \"sentence\" and the other column of \"class\". It contains 2,212 unique sentences of user expressions asking questions and aswering to the questions the chatbot asked to the users(see Table.TABREF9). The sentences are all in English without having any missspelled words, and labeled with 19 distinct classes that are identified and designed by humans. Additional data set that only contains the sentences was made for the purpose of this study by taking out the \"class\" column from the original data set.",
"From each sentence, we removed punctuations, special characters and English stopwords to keep only those meaningful words that serve the main purpose of the sentence, and to avoid any redundant computing. We then tokenized each sentence into words to process the data further in word level. For words in each sentence, we added synonyms of the words to handle more variations of the sentence as a typical method of increasing the resulting classification models' capability of understanding more unseen expressions with different words that describe similar meanings. Although we used the predefined synonyms from the Python NLTK package, one might develop it's own synonym data to use in accordance with the context of the particular data to achieve a better accuracy. We also added bigrams of the words to deal with those cases where the tokenization breaks the meaning of the word that consist of two words. For example, if we tokenized the sentence \"go to binghamton university\" and process the further steps without adding bigrams of them, the model is likely to yield a lower confidence on classifying unseen sentences with \"binghamton university\", or does not understand \"binghamton university\" at all since the meaning of \"binghamton university\" is lost in the data set BIBREF12.",
"With the preprocessed text data, we built vector representations of the sentences by performing weighted document representation using TFIDF weighting scheme BIBREF13, BIBREF14. TFIDF, as known as Term frequency inversed document frequency, is a document representation that takes account of the importance of each word by its frequency in the whole set of documents and its frequency in particular sets of documents. Specifically, let $D = \\lbrace d_1, \\dots , d_n\\rbrace $ be a set of documents and $T = \\lbrace t_1, \\dots , t_m\\rbrace $ the set of unique terms in the entire documents where $n$ is the number of documents in the data set and $m$ the number of unique words in the documents. In this study, the documents are the preprocessed sentences and the terms are the unique words in the preprocessed sentences. The importance of a word is captured with its frequency as $tf(d,t)$ denoting the frequency of the word $t \\in T$ in the document $d \\in D$. Then a document $d$ is represented as an $m$-dimensional vector ${{t_d}}=(tf(d,t_1),\\dots ,tf(d,t_m))$. However, In order to compute more concise and meaningful importance of a word, TFIDF not only takes the frequency of a particular word in a particular document into account, but also considers the number of documents that the word appears in the entire data set. The underlying thought of this is that a word appeared frequently in some groups of documents but rarely in the other documents is more important and relavant to the groups of documents. Applying this contcept, $tf(d,t)$ is weighted by the document frequency of a word, and $tf(d,t)$ becomes $tfidf(d,t) = tf(d,t)\\times log\\frac{|D|}{df(t)}$ where $df(t)$ is the number of documents the word $t$ appears, and thus the document $d$ is represented as ${{t_d}}=(tfidf(d,t_1),\\dots ,tfidf(d,t_m))$."
],
[
"With the TFIDF vector representations, we formed sentence networks to investigate the usefulness of the network community detection. In total, 10 sentence networks (see Figure.FIGREF13 and Figure.FIGREF16) were constructed with 2,212 nodes representing sentences and edge weights representing the pairwise similarities between sentences with 10 different network connectivity threshold values. The networks we formed were all undirected and weighted graphs. Particularly, as for the network edge weights, the cosine similarity BIBREF14, BIBREF15 is used to compute the similarities between sentences. The cosine similarity is a similarity measure that is in a floating number between 0 and 1, and computed as the angle difference between two vectors. A cosine similarity of 0 means that the two vectors are perpendicular to each other implying no similarity, on the other hand a cosine similarity of 1 means that the two vectors are identical. It is popularly used in text mining and information retrieval techniques. In our study, the cosine similarity between two sentences $i$ and $j$ is defined as below equation.",
"where:",
"${t_{d_i}} = (tfidf(d_i,t_1),\\dots ,tfidf(d_i,t_m))$, $the$ $TFIDF$ $vector$ $of$ $i$-$th$ $sentence$",
"${t_{d_j}} = (tfidf(d_j,t_1),\\dots ,tfidf(d_j,t_m))$, $the$ $TFIDF$ $vector$ $of$ $j$-$th$ $sentence$",
"$d$ $=$ $a$ $preprocessed$ $sentence$ $in$ $the$ $data$ $set$",
"$t$ $=$ $a$ $unique$ $word$ $appeared$ $in$ $the$ $preprocessed$ $data$ $set$",
"To build our sentence networks, we formed a network adjacency matrix for 2,212 sentences, $M$, with the pairwise cosine similarities of TFIDF vector representations computed in the above step."
],
[
"The particular algorithm of network community detection used in this study is Louvain method BIBREF2 which partitions a network into the number of nodes - every node is its own comunity, and from there, clusters the nodes in a way to maximize each cluster's modularity which indicates how strong is the connectivity between the nodes in the community. This means that, based on the cosine similarity scores - the networks edge weights, the algorithm clusters similar sentences together in a same community while the algorithm proceeds maximizing the connectivity strength amongst the nodes in each community. The network constructed with no threshold in place was detected to have 18 distinct communities with three single node communities. Based on the visualized network (see Figure.FIGREF13), it seemed that the network community detection method clustered the sentence network as good as the original data set with human labeled classes although the communities do not look quite distinct. However, based on the fact that it had three single node communities and the number of distinct communities is less than the number of classes in the human labeled data set, we suspected possible problems that would degrade the quality of the community detection for the use of training text classification models."
],
[
"We checked the community detection results with the original human labeled data by comparing the sentences in each community with the sentences in each human labeled class to confirm how well the algorithm worked. We built class maps to facilitate this process (see Figure.FIGREF15) that show mapping between communities in the sentence networks and classes in the original data set. Using the class maps, we found two notable cases where; 1. the sentences from multiple communities are consist of the sentences of one class of the human labeled data, meaning the original class is splitted into multiple communities and 2. the sentences from one community consist of the sentences of multiple classes in human labeled data, meaning multiple classes in the original data are merged into one community. For example, in the earlier case (see blue lines in Figure.FIGREF15) which we call Class-split, the sentences in COMMUNITY_1, COMMUNITY_2, COMMUNITY_5, COMMUNITY_8, COMMUNITY_10, COMMUNITY_14 and COMMUNITY_17 are the same as the sentences in CHAT_AGENT class. Also, in the later case (see red lines in Figure.FIGREF15) which we call Class-merge, the sentences in COMMUNITY_7 are the same as the sentences in GETINFO_PARKING, GETINFO_NEARBY_RESTAURANT, GETINFO_TOUR, GETINFO_EXACT_ADDRESS, STARTOVER, ORDER_EVENTS, GETINFO_JOB, GETINFO, GETINFO_DRESSCODE, GETINFO_LOST_FOUND as well as GETINFO_FREE_PERFORMANCE.",
"The Class-split happens when a human labeled class is devided into multiple communities as the sentence network is clustered based on the semantic similarity. This actually can help improve the text classification based systems to work more sophisticatedly as the data set gets more detailed subclasses to design the systems with. Although, it is indeed a helpful phenomena, we would like to minimize the number of subclasses created by the community detection algorithm simply because we want to avoid having too many subclasses that would add more complexity in designing any applications using the community data. On the other hand, the Class-merge happens when multiple human labeled classes are merged into one giant community. This Class-merge phenomena also helps improve the original data set by detecting either misslabeled or ambiguous data entries. We will discuss more details in the following subsection. Nonetheless, we also want to minimize the number of classes merged into the one giant community, because when too many classes are merged into one class, it simply implies that the sentence network is not correctly clustered. For example, as shown in Figure.FIGREF15 red lines, 12 different human labeled classes that do not share any similar intents are merged into COMMUNITY_7. If we trained a text classification model on this data, we would have lost the specifically designed purposes of the 12 different classes, expecting COMMUNITY_7 to deal with all the 12 different types of sentences. This would dramatically degrade the performance of the text classification models.",
"In order to quantify the degree of Class-split and Class-merge of a network, and to find out optimal connectivity threshold that would yield the sentence network with the best community detection quality, we built two metrics using the class map. We quantified the Class-split by counting the number of communities splitted out from each and every human labeled class, and the Class-merge by counting the number of human labeled classes that are merged into each and every community. We then averaged the Class-splits across all the human labeled classes and Class-merges across all the communities. For example, using the class map of the sentence network with no threshold, we can easily get the number of Class-split and Class-merge as below. By averaging them, we get the Class_split and Class_merge scores of the sentence network, which is 2.7368 and 2.8333 respectively.",
"We computed the normalized Class_split and Class_merge scores for all 10 sentence networks (see Figure.FIGREF17). Figure.FIGREF17 shows the normalized Class-split and Class-merge scores of the 10 sentence networks with different connectivity thresholds ranging from $0.0$ to $0.9$. With these series of Class_split and Class_merge scores, we found out that at 0.5477 of connectivity threshold we can get the sentence network that would give us the best quality of community detection result particularly for our purpose of training text classification models."
],
[
"Using the Class_merge information we got from the class map, we were able to spot out those sentences that are either misslabeled or ambiguous between classes in the original data set. This is extreamly helpful and convenient feature in fixing and improving text data for classification problems, because data fixing is normally a tedious and time consuming tasks which takes a great amount of human labor. For example, by looking at the class map, in our sentence network with no threshold, COMMUNITY_5 contains sentences appeared in GETINFO_EXACT_ADDRESS and CHAT_AGENT classes. We investigated the sentences in COMMUNITY_5, and were able to spot out one sentence ['I need to address a human being!'] which is very ambiguous for machines to classify between the two classes. This sentence is originally designed for CHAT_AGENT class, but because of its ambiguous expression with the word 'address', it is together with sentences in GETINFO_EXACT_ADDRESS in COMMUNITY_5. After fixing the ambiguity of that sentence by correcting it to ['I need to talk to a human being!'], we easily improved the original data set."
],
[
"Once we got the optimal connectivity threshold using the Class_split and Class_merge scores as shown above sections, we built the sentence network with the optimal threshold of 0.5477. We then applied the Louvain method to detect communities in the network, and to automatically label the data set. The network with threshold of 0.5477 has 399 communities with 20,856 edges. Class_split and Class_merge scores of the network was 22.3158 and 1.0627 respectively. We finally trained and tested machine learning based text classification models on the data set labeled by the community detection outcome to see how well our approach worked. Following a general machine learning train and test practice, we split the data set into train set(80% of the data) and test set(20% of the data). The particular models we trained and tested were standard Support vector machine BIBREF16 and Randome forest BIBREF17 models that are popularly used in natural language processing such as spam e-mail and news article categorizations. More details about the two famous machine learning models are well discussed in the cited papers."
],
[
"Figure.FIGREF20 shows the accuracy of the four Support vector machine and Random forest models trained on the original human labeled data and on the data labeled by our method. The accuracies are hit ratios that compute the number of correctly classified sentences over the number of all sentences in the test data. For example, if a model classified 85 sentences correctly out of 100 test sentences, then the accuracy is 0.85. In order to accurately compute the Ground truth hit ratio, we used the ground truth messages in the chatbot. The messages are the sentences that are to be shown to the chatbot users in response to the classification for a particular user query as below.",
"For example, for a question of \"how do I get there by subway?\", in the chatbot, there is a designed message of \"You can take line M or B to 35th street.\" to respond to that particular query. Using these output messages in the chatbot, we were able to compute the ground truth accuracy of our classification models by comprehending the input sentences in the test sets, the detected classes from the models and linked messages. In our test, the Support vector machine trained on human labeled data performed 0.9572 while the same model trained on the data labeled by our method resulted 0.9931. Also, the Random forest model trained on human labeled data performed 0.9504 while the same model trained on the data labeled by our method did 0.9759."
],
[
"In this study, we demonstrated a new approach of training text classification models using the network community detection, and showed how the network community detection can help improve the models by automatically labeling text data and detecting misslabeled or ambiguous data points. As seen in this paper, we were able to yield better results in the accuracy of Support vector machine and Random forest models compared to the same models that were trained on the original human labeled data for the particular text classification problem. Our approach is not only useful in producing better classifiation models, but also in testing the quality of human made text data. One might be able to get even better results using this method by utilizing more sophisticatedly custom designed synonyms and stopwords, using more advanced natural language processing methods such as word-embeddings, utilizing higher n-grams such as trigrams, and using more balanced data sets. In the future, we would like to expand this study further to use the network itself to parse out classifications of unseen sentences without training machine learning models."
]
],
"section_name": [
"Introduction",
"Method",
"Method ::: Data, Preprocessing and Representation",
"Method ::: Sentence Network Construction",
"Method ::: Network Community Detection and Classification Models",
"Method ::: Network Community Detection and Classification Models ::: Quality of Network Community Detection Based Labeling",
"Method ::: Network Community Detection and Classification Models ::: Detecting Misslabeled or Ambiguous Sentences in Human-made Data Set",
"Method ::: Network Community Detection and Classification Models ::: Classification Models",
"Result",
"Discussions and Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"55fda8f9472d52e9ab8e38594d8884478dd17cec",
"6f7d45e25190499feba7c660801b384e76ea8bc7"
],
"answer": [
{
"evidence": [
"For example, for a question of \"how do I get there by subway?\", in the chatbot, there is a designed message of \"You can take line M or B to 35th street.\" to respond to that particular query. Using these output messages in the chatbot, we were able to compute the ground truth accuracy of our classification models by comprehending the input sentences in the test sets, the detected classes from the models and linked messages. In our test, the Support vector machine trained on human labeled data performed 0.9572 while the same model trained on the data labeled by our method resulted 0.9931. Also, the Random forest model trained on human labeled data performed 0.9504 while the same model trained on the data labeled by our method did 0.9759."
],
"extractive_spans": [],
"free_form_answer": "SVM",
"highlighted_evidence": [
"Support vector machine trained on human labeled data performed 0.9572 while the same model trained on the data labeled by our method resulted 0.9931. Also, the Random forest model trained on human labeled data performed 0.9504 while the same model trained on the data labeled by our method did 0.9759."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For example, for a question of \"how do I get there by subway?\", in the chatbot, there is a designed message of \"You can take line M or B to 35th street.\" to respond to that particular query. Using these output messages in the chatbot, we were able to compute the ground truth accuracy of our classification models by comprehending the input sentences in the test sets, the detected classes from the models and linked messages. In our test, the Support vector machine trained on human labeled data performed 0.9572 while the same model trained on the data labeled by our method resulted 0.9931. Also, the Random forest model trained on human labeled data performed 0.9504 while the same model trained on the data labeled by our method did 0.9759."
],
"extractive_spans": [],
"free_form_answer": "SVM",
"highlighted_evidence": [
"In our test, the Support vector machine trained on human labeled data performed 0.9572 while the same model trained on the data labeled by our method resulted 0.9931. Also, the Random forest model trained on human labeled data performed 0.9504 while the same model trained on the data labeled by our method did 0.9759."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"14b431e18f1e51ec794239599c263fd51853f6fb",
"7ce80714af058bc2a79c4edc8b01c7a443ea9165"
],
"answer": [
{
"evidence": [
"Gathered a set of text data that was used to develop a particular conversational intelligence(chatbot) system from an artificial intelligence company, Pypestream. The data contains over 2,000 sentences of user expressions on that particular chatbot service such as [\"is there any parking space?\", \"what movies are playing?\", \"how can I get there if I'm taking a subway?\"]"
],
"extractive_spans": [],
"free_form_answer": "Text data from Pypestream",
"highlighted_evidence": [
"Gathered a set of text data that was used to develop a particular conversational intelligence(chatbot) system from an artificial intelligence company, Pypestream."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The data set obtained from Pypestream is permitted to be used for the research purpose only, and for a security reason, we are not allowed to share the data set. It was once originaly used for creating a conversational intelligence system(chatbot) to support customer inqueries about a particular service. The data set is a two-column comma separated value format data with one column of \"sentence\" and the other column of \"class\". It contains 2,212 unique sentences of user expressions asking questions and aswering to the questions the chatbot asked to the users(see Table.TABREF9). The sentences are all in English without having any missspelled words, and labeled with 19 distinct classes that are identified and designed by humans. Additional data set that only contains the sentences was made for the purpose of this study by taking out the \"class\" column from the original data set."
],
"extractive_spans": [
"The data set obtained from Pypestream"
],
"free_form_answer": "",
"highlighted_evidence": [
"The data set obtained from Pypestream is permitted to be used for the research purpose only, and for a security reason, we are not allowed to share the data set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"29bdfa9815e3467ca104319e80b5a4b2788a81d5",
"5f8b91b68e741b60c357ffdac030282f620f7744"
],
"answer": [
{
"evidence": [
"The data set obtained from Pypestream is permitted to be used for the research purpose only, and for a security reason, we are not allowed to share the data set. It was once originaly used for creating a conversational intelligence system(chatbot) to support customer inqueries about a particular service. The data set is a two-column comma separated value format data with one column of \"sentence\" and the other column of \"class\". It contains 2,212 unique sentences of user expressions asking questions and aswering to the questions the chatbot asked to the users(see Table.TABREF9). The sentences are all in English without having any missspelled words, and labeled with 19 distinct classes that are identified and designed by humans. Additional data set that only contains the sentences was made for the purpose of this study by taking out the \"class\" column from the original data set."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The sentences are all in English without having any missspelled words, and labeled with 19 distinct classes that are identified and designed by humans."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Gathered a set of text data that was used to develop a particular conversational intelligence(chatbot) system from an artificial intelligence company, Pypestream. The data contains over 2,000 sentences of user expressions on that particular chatbot service such as [\"is there any parking space?\", \"what movies are playing?\", \"how can I get there if I'm taking a subway?\"]"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Gathered a set of text data that was used to develop a particular conversational intelligence(chatbot) system from an artificial intelligence company, Pypestream. "
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"6a08fcda2ae1797cb9fb1a8b2dd414a4a6c64f1e",
"8928db1f2ed89b813df6a2f1adb645ba030ead74"
],
"answer": [
{
"evidence": [
"The particular algorithm of network community detection used in this study is Louvain method BIBREF2 which partitions a network into the number of nodes - every node is its own comunity, and from there, clusters the nodes in a way to maximize each cluster's modularity which indicates how strong is the connectivity between the nodes in the community. This means that, based on the cosine similarity scores - the networks edge weights, the algorithm clusters similar sentences together in a same community while the algorithm proceeds maximizing the connectivity strength amongst the nodes in each community. The network constructed with no threshold in place was detected to have 18 distinct communities with three single node communities. Based on the visualized network (see Figure.FIGREF13), it seemed that the network community detection method clustered the sentence network as good as the original data set with human labeled classes although the communities do not look quite distinct. However, based on the fact that it had three single node communities and the number of distinct communities is less than the number of classes in the human labeled data set, we suspected possible problems that would degrade the quality of the community detection for the use of training text classification models."
],
"extractive_spans": [
"18 "
],
"free_form_answer": "",
"highlighted_evidence": [
"The network constructed with no threshold in place was detected to have 18 distinct communities with three single node communities. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The data set obtained from Pypestream is permitted to be used for the research purpose only, and for a security reason, we are not allowed to share the data set. It was once originaly used for creating a conversational intelligence system(chatbot) to support customer inqueries about a particular service. The data set is a two-column comma separated value format data with one column of \"sentence\" and the other column of \"class\". It contains 2,212 unique sentences of user expressions asking questions and aswering to the questions the chatbot asked to the users(see Table.TABREF9). The sentences are all in English without having any missspelled words, and labeled with 19 distinct classes that are identified and designed by humans. Additional data set that only contains the sentences was made for the purpose of this study by taking out the \"class\" column from the original data set."
],
"extractive_spans": [
"19 "
],
"free_form_answer": "",
"highlighted_evidence": [
" The sentences are all in English without having any missspelled words, and labeled with 19 distinct classes that are identified and designed by humans. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"which had better results, the svm or the random forest model?",
"which network community detection dataset was used?",
"did they collect the human labeled data?",
"how many classes are they classifying?"
],
"question_id": [
"ff36168caf48161db7039e3bd4732cef31d4de99",
"556782bb96f8fc07d14865f122362ebcc79134ec",
"cb58605a7c230043bd0d6e8d5b068f8b533f45fe",
"7969b8d80e12aa3ebb89b5622bc564f44e98329f"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Fig. 1. Analysis process. a. preprocess the text data by removing punctuations, stopwords and special characters, and add synonyms and bigrams, b. transform the prepocessed sentence into TFIDF vector, and compute pair-wise cosine similairy between every sentence pair, c. construct the sentence networks, and apply Louvain method to detect communities of every sentence, d. label each sentence with the detected communities, e. train and test Support vector machine and Random forest models on the labeled data.",
"Table 1. The original text data contains 2,212 unique sentences labeled with 19 distinct classes assigned by humans. It is not a balanced data set which means that each class contains different number of sentences.",
"Fig. 2. A sentence network and its communities. The sentence network with no threshold on the node connectivity has 18 distinct communities including three single node communities.",
"Fig. 3. A class map between detected communities and human labeled classes. The class map shows a mapping(all lines) between communities detected by the Louvain method and their corresponding human labeled classes of the sentence network with no threshold. Some communities contain sentences that appeared in multiple human labeled classes(blue lines, we call this Class-split), and some human labeled classes contain sentences that appeared in multiple communities(red lines, we call this Class-merge).",
"Fig. 4. Nine sentence networks with different connectivity thresholds. Each node represents a sentence and an edge weight between two nodes represents the similarity between two sentences. In this study, we removed edges whose weight is below the threshold. a. network with threshold of 0.1 has 29 distinct communities with 11 signle node communities. b. network with threshold of 0.2 has 45 distinct communities with 20 single node communities, c. network with threshold of 0.3 has 100 distinct communities with 58 single node communities, d.network with threshold 0.4 has 187 distinct communities with 120 single node communities, e. network with threshold 0.5 has 320 distinct communities with 204 single node communities, f. network with threshold of 0.6 has 500 distinct communities with 335 single node communities, g. network with threshold of 0.7 has 719 distinct communities with 499 single node communities, h. network with threshold of 0.8 has 915 distinct communities with 658 single node communities, i. network with threshold of 0.9 has 1,140 distinct communities with 839 single node communities. Based on the visualized sentence networks, as the threshold gets larger it is shown that each network has more distinct communities and the detected communities look more distinct from each other with more single node communities.",
"Fig. 5. Optimal connectivity threshold point based on Class-split and Classmerge mertics The normalized Class-split score(blue line) is shown to increase as the threshold gets larger. On the other hand, normalized Class-merge(red line) decreases as the threshold gets larger. The optimal connectivity threshold is the point where both scores are minimized which is 0.5477.",
"Fig. 6. Accuracies of text classification models The red bars represent the accuracies of the Support vector machine(0.9572) and the Random forest(0.9504) model trained on the original human labeled data, while the blue bars represent the accuracies of the same models trained the data set labeled by the network community detection algorithm(0.9931 and 0.9759 respectively). It is shown that the models trained on the community data resulted higher accuracy in classifying the sentences in the test data."
],
"file": [
"4-Figure1-1.png",
"5-Table1-1.png",
"7-Figure2-1.png",
"8-Figure3-1.png",
"9-Figure4-1.png",
"10-Figure5-1.png",
"12-Figure6-1.png"
]
} | [
"which had better results, the svm or the random forest model?",
"which network community detection dataset was used?"
] | [
[
"1909.11706-Result-1"
],
[
"1909.11706-Method ::: Data, Preprocessing and Representation-0",
"1909.11706-Method-1"
]
] | [
"SVM",
"Text data from Pypestream"
] | 333 |
1707.07568 | CAp 2017 challenge: Twitter Named Entity Recognition | The paper describes the CAp 2017 challenge. The challenge concerns the problem of Named Entity Recognition (NER) for tweets written in French. We first present the data preparation steps we followed for constructing the dataset released in the framework of the challenge. We begin by demonstrating why NER for tweets is a challenging problem especially when the number of entities increases. We detail the annotation process and the necessary decisions we made. We provide statistics on the inter-annotator agreement, and we conclude the data description part with examples and statistics for the data. We, then, describe the participation in the challenge, where 8 teams participated, with a focus on the methods employed by the challenge participants and the scores achieved in terms of F$_1$ measure. Importantly, the constructed dataset comprising $\sim$6,000 tweets annotated for 13 types of entities, which to the best of our knowledge is the first such dataset in French, is publicly available at \url{http://cap2017.imag.fr/competition.html} . | {
"paragraphs": [
[
"The proliferation of the online social media has lately resulted in the democratization of online content sharing. Among other media, Twitter is very popular for research and application purposes due to its scale, representativeness and ease of public access to its content. However, tweets, that are short messages of up to 140 characters, pose several challenges to traditional Natural Language Processing (NLP) systems due to the creative use of characters and punctuation symbols, abbreviations ans slung language.",
"Named Entity Recognition (NER) is a fundamental step for most of the information extraction pipelines. Importantly, the terse and difficult text style of tweets presents serious challenges to NER systems, which are usually trained using more formal text sources such as newswire articles or Wikipedia entries that follow particular morpho-syntactic rules. As a result, off-the-self tools trained on such data perform poorly BIBREF0 . The problem becomes more intense as the number of entities to be identified increases, moving from the traditional setting of very few entities (persons, organization, time, location) to problems with more. Furthermore, most of the resources (e.g., software tools) and benchmarks for NER are for text written in English. As the multilingual content online increases, and English may not be anymore the lingua franca of the Web. Therefore, having resources and benchmarks in other languages is crucial for enabling information access worldwide.",
"In this paper, we propose a new benchmark for the problem of NER for tweets written in French. The tweets were collected using the publicly available Twitter API and annotated with 13 types of entities. The annotators were native speakers of French and had previous experience in the task of NER. Overall, the generated datasets consists of INLINEFORM0 tweets, split in training and test parts.",
"The paper is organized in two parts. In the first, we discuss the data preparation steps (collection, annotation) and we describe the proposed dataset. The dataset was first released in the framework of the CAp 2017 challenge, where 8 systems participated. Following, the second part of the paper presents an overview of baseline systems and the approaches employed by the systems that participated. We conclude with a discussion of the performance of Twitter NER systems and remarks for future work."
],
[
"In this section we describe the steps taken during the organisation of the challenge. We begin by introducing the general guidelines for participation and then proceed to the description of the dataset."
],
[
"The CAp 2017 challenge concerns the problem of NER for tweets written in French. A significant milestone while organizing the challenge was the creation of a suitable benchmark. While one may be able to find Twitter datasets for NER in English, to the best of our knowledge, this is the first resource for Twitter NER in French. Following this observation, our expectations for developing the novel benchmark are twofold: first, we hope that it will further stimulate the research efforts for French NER with a focus on in user-generated text social media. Second, as its size is comparable with datasets previously released for English NER we expect it to become a reference dataset for the community.",
"The task of NER decouples as follows: given a text span like a tweet, one needs to identify contiguous words within the span that correspond to entities. Given, for instance, a tweet “Les Parisiens supportent PSG ;-)” one needs to identify that the abbreviation “PSG” refers to an entity, namely the football team “Paris Saint-Germain”. Therefore, there two main challenges in the problem. First one needs to identify the boundaries of an entity (in the example PSG is a single word entity), and then to predict the type of the entity. In the CAp 2017 challenge one needs to identify among 13 types of entities: person, musicartist, organisation, geoloc, product, transportLine, media, sportsteam, event, tvshow, movie, facility, other in a given tweets. Importantly, we do not allow the entities to be hierarchical, that is contiguous words belong to an entity as a whole and a single entity type is associated per word. It is also to be noted that some of the tweets may not contain entities and therefore systems should not be biased towards predicting one or more entities for each tweet.",
"Lastly, in order to enable participants from various research domains to participate, we allowed the use of any external data or resources. On one hand, this choice would enable the participation of teams who would develop systems using the provided data or teams with previously developed systems capable of setting the state-of-the-art performance. On the other hand, our goal was to motivate approaches that would apply transfer learning or domain adaptation techniques on already existing systems to adapt them for the task of NER for French tweets."
],
[
"For the purposes of the CAp 2017 challenge we constructed a dataset for NER of French tweets. Overall, the dataset comprises 6,685 annotated tweets with the 13 types of entities presented in the previous section. The data were released in two parts: first, a training part was released for development purposes (dubbed “Training” hereafter). Then, to evaluate the performance of the developed systems a “Test” dataset was released that consists of 3,685 tweets. For compatibility with previous research, the data were released tokenized using the CoNLL format and the BIO encoding.",
"To collect the tweets that were used to construct the dataset we relied on the Twitter streaming API. The API makes available a part of Twitter flow and one may use particular keywords to filter the results. In order to collect tweets written in French and obtain a sample that would be unbiased towards particular types of entities we used common French words like articles, pronouns, and prepositions: “le”,“la”,“de”,“il”,“elle”, etc.. In total, we collected 10,000 unique tweets from September 1st until September the 15th of 2016.",
"Complementary to the collection of tweets using the Twitter API, we used 886 tweets provided by the “Société Nationale des Chemins de fer Français” (SNCF), that is the French National Railway Corporation. The latter subset is biased towards information in the interest of the corporation such as train lines or names of train stations. To account for the different distribution of entities in the tweets collected by SNCF we incorporated them in the data as follows:",
"For the training set, which comprises 3,000 tweets, we used 2,557 tweets collected using the API and 443 tweets of those provided by SNCF.",
"For the test set, which comprises 3,685 consists we used 3,242 tweets from those collected using the API and the remaining 443 tweets from those provided by SNCF."
],
[
"In the framework of the challenge, we were required to first identify the entities occurring in the dataset and, then, annotate them with of the 13 possible types. Table TABREF12 provides a description for each type of entity that we made available both to the annotators and to the participants of the challenge.",
"Mentions (strings starting with @) and hashtags (strings starting with #) have a particular function in tweets. The former is used to refer to persons while the latter to indicate keywords. Therefore, in the annotation process we treated them using the following protocol: A hashtag or a mention should be annotated as an entity if:",
"For a hashtag or a mention to be annotated both conditions are to be met. Figure FIGREF16 elaborates on that:",
"We measure the inter-annotator agreement between the annotators based on the Cohen's Kappa (cf. Table TABREF15 ) calculated on the first 200 tweets of the training set. According to BIBREF1 our score for Cohen's Kappa (0,70) indicates a strong agreement.",
"In the example given in Figure FIGREF20 :",
"[name=M1, matrix of nodes, row sep=10pt, column sep=3pt,ampersand replacement=&] schema) [text=black] Il; & schema-spezifisch) [text=black] rejoint; & nutzerinfo) [text=frenchrose] Pierre; & host) [text=frenchrose] Fabre;",
"query) [text=black] comme; & fragment) [text=black] directeur; & text=black] des; & text=black] marques;",
"ducray) [text=magenta] Ducray; & text=black] et; & a) [text=magenta] A;& text=magenta] -; & derma) [text=magenta] Derma;",
"; [overbrace style] (nutzerinfo.north west) – (host.north east) node [overbrace text style,rectangle,draw,color=white,rounded corners,inner sep=4pt, fill=frenchrose] Group; [underbrace style] (ducray.south west) – (ducray.south east) node [underbrace text style,rectangle,draw=black,color=white,rounded corners,inner sep=4pt, fill=magenta] Brand; [underbrace style] (a.south west) – (derma.south east) node [underbrace text style,rectangle,draw,color=white,rounded corners,inner sep=4pt, fill=magenta] Brand;",
"A given entity must be annotated with one label. The annotator must therefore choose the most relevant category according to the semantics of the message. We can therefore find in the dataset an entity annotated with different labels. For instance, Facebook can be categorized as a media (“notre page Facebook\") as well as an organization (“Facebook acquires acquiert Nascent Objects\").",
"Event-named entities must include the type of the event. For example, colloque (colloquium) must be annotated in “le colloque du Réveil français est rejoint par\".",
"Abbreviations must be annotated. For example, LMP is the abbreviation of “Le Meilleur Patissier\" which is a tvshow.",
"As shown in Figure 1, the training and the test set have a similar distribution in terms of named entity types. The training set contains 2,902 entities among 1,656 unique entities (i.e. 57,1%). The test set contains 3,660 entities among 2,264 unique entities (i.e. 61,8%). Only 15,7% of named entities are in both datasets (i.e. 307 named entities). Finally we notice that less than 2% of seen entities are ambiguous on the testset."
],
[
"Overall, the results of 8 systems were submitted for evaluation. Among them, 7 submitted a paper discussing their implementation details. The participants proposed a variety of approaches principally using Deep Neural Networks (DNN) and Conditional Random Fields (CRF). In the rest of the section we provide a short overview for the approaches used by each system and discuss the achieved scores.",
"Submission 1 BIBREF2 The system relies on a recurrent neural network (RNN). More precisely, a bi-directional GRU network is used and a CRF layer is adde on top of the network to improve label prediction given information from the context of a word, that is the previous and next tags.",
"Submission 2 BIBREF3 The system follows a state-of-the-art approach by using a CRF for to tag sentences with NER tags. The authors develop a set of features divided into six families (orthographic, morphosyntactic, lexical, syntactic, polysemic traits, and language-modeling traits).",
"Submission 3 BIBREF4 , ranked first, employ CRF as a learning model. In the feature engineering process they use morphosyntactic features, distributional ones as well as word clusters based on these learned representations.",
"Submission 4 BIBREF5 The system also relies on a CRF classifier operating on features extracted for each word of the tweet such as POS tags etc. In addition, they employ an existing pattertn mining NER system (mXS) which is not trained for tweets. The addition of the system's results in improving the recall at the expense of precision.",
"Submission 5 BIBREF6 The authors propose a bidirectional LSTM neural network architecture embedding words, capitalization features and character embeddings learned with convolutional neural networks. This basic model is extended through a transfer learning approach in order to leverage English tweets and thus overcome data sparsity issues.",
"Submission 6 BIBREF7 The approach proposed here used adaptations for tailoring a generic NER system in the context of tweets. Specifically, the system is based on CRF and relies on features provided by context, POS tags, and lexicon. Training has been done using CAP data but also ESTER2 and DECODA available data. Among possible combinations, the best one used CAP data only and largely relied on a priori data.",
"Submission 7 Lastly, BIBREF8 uses a rule based system which performs several linguistic analysis like morphological and syntactic as well as the extraction of relations. The dictionaries used by the system was augmented with new entities from the Web. Finally, linguistics rules were applied in order to tag the detected entities."
],
[
"Table TABREF22 presents the ranking of the systems with respect to their F1-score as well as the precision and recall scores.",
"The approach proposed by BIBREF4 topped the ranking showing how a standard CRF approach can benefit from high quality features. On the other hand, the second best approach does not require heavy feature engineering as it relies on DNNs BIBREF2 .",
"We also observe that the majority of the systems obtained good scores in terms of F1-score while having important differences in precision and recall. For example, the Lattice team achieved the highest precision score."
],
[
"In this paper we presented the challenge on French Twitter Named Entity Recognition. A large corpus of around 6,000 tweets were manyally annotated for the purposes of training and evaluation. To the best of our knowledge this is the first corpus in French for NER in short and noisy texts. A total of 8 teams participated in the competition, employing a variety of state-of-the-art approaches. The evaluation of the systems helped us to reveal the strong points and the weaknesses of these approaches and to suggest potential future directions. "
]
],
"section_name": [
"Introduction",
"Challenge Description",
"Guidelines for Participation",
"The Released Dataset",
"Annotation",
"Description of the Systems",
"Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"80162e75a63790c5c4b15bd08442b5c55b020366",
"e712a1884708d1d1d10795b96f7c22d434a816cd"
],
"answer": [
{
"evidence": [
"Submission 3 BIBREF4 , ranked first, employ CRF as a learning model. In the feature engineering process they use morphosyntactic features, distributional ones as well as word clusters based on these learned representations."
],
"extractive_spans": [],
"free_form_answer": "CRF model that used morphosyntactic and distributional features, as well as word clusters based on these learned representations.",
"highlighted_evidence": [
"Submission 3 BIBREF4 , ranked first, employ CRF as a learning model. In the feature engineering process they use morphosyntactic features, distributional ones as well as word clusters based on these learned representations."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Submission 3 BIBREF4 , ranked first, employ CRF as a learning model. In the feature engineering process they use morphosyntactic features, distributional ones as well as word clusters based on these learned representations."
],
"extractive_spans": [
"employ CRF as a learning model. In the feature engineering process they use morphosyntactic features, distributional ones as well as word clusters based on these learned representations."
],
"free_form_answer": "",
"highlighted_evidence": [
"Submission 3 BIBREF4 , ranked first, employ CRF as a learning model. In the feature engineering process they use morphosyntactic features, distributional ones as well as word clusters based on these learned representations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"2b4f21599eb5f15e4cd371ee55ad25bd0230c0fa",
"ddfe361cca520cfc562d0815bd13c78cb672523c"
],
"answer": [
{
"evidence": [
"As shown in Figure 1, the training and the test set have a similar distribution in terms of named entity types. The training set contains 2,902 entities among 1,656 unique entities (i.e. 57,1%). The test set contains 3,660 entities among 2,264 unique entities (i.e. 61,8%). Only 15,7% of named entities are in both datasets (i.e. 307 named entities). Finally we notice that less than 2% of seen entities are ambiguous on the testset."
],
"extractive_spans": [],
"free_form_answer": "the number of entities, unique entities in the training and test sets",
"highlighted_evidence": [
"As shown in Figure 1, the training and the test set have a similar distribution in terms of named entity types. The training set contains 2,902 entities among 1,656 unique entities (i.e. 57,1%). The test set contains 3,660 entities among 2,264 unique entities (i.e. 61,8%). Only 15,7% of named entities are in both datasets (i.e. 307 named entities). Finally we notice that less than 2% of seen entities are ambiguous on the testset.\n\n"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Figure 1: Distribution of entities across the 13 possible entity types for the training and test data. Overall, 2,902 entities occur in the training data and 3,660 in the test."
],
"extractive_spans": [],
"free_form_answer": "Entity distribution in the training and test data.",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 1: Distribution of entities across the 13 possible entity types for the training and test data. Overall, 2,902 entities occur in the training data and 3,660 in the test."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"2da8b20f3b2134a178482f791d16c739acab750e",
"4e23be942e25b3468b4731214ebc17b3306bd5a4"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Cohen’s Kappa for the interannotator agreement. “Ann\" stands for the annotator. The Table is symmetric."
],
"extractive_spans": [],
"free_form_answer": "Average Cohen’s Kappa score of inter-annotator agreement was 0.655",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Cohen’s Kappa for the interannotator agreement. “Ann\" stands for the annotator. The Table is symmetric."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We measure the inter-annotator agreement between the annotators based on the Cohen's Kappa (cf. Table TABREF15 ) calculated on the first 200 tweets of the training set. According to BIBREF1 our score for Cohen's Kappa (0,70) indicates a strong agreement."
],
"extractive_spans": [
"score for Cohen's Kappa (0,70)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We measure the inter-annotator agreement between the annotators based on the Cohen's Kappa (cf. Table TABREF15 ) calculated on the first 200 tweets of the training set. According to BIBREF1 our score for Cohen's Kappa (0,70) indicates a strong agreement."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"78cb273294206d99b352dcd9e3c85211ad3ab6c3",
"a9edb82f54ed893806d30f0e3aa619fb0b75704b"
],
"answer": [
{
"evidence": [
"In the framework of the challenge, we were required to first identify the entities occurring in the dataset and, then, annotate them with of the 13 possible types. Table TABREF12 provides a description for each type of entity that we made available both to the annotators and to the participants of the challenge."
],
"extractive_spans": [],
"free_form_answer": "determine entities and annotate them based on the description that matched the type of entity",
"highlighted_evidence": [
"In the framework of the challenge, we were required to first identify the entities occurring in the dataset and, then, annotate them with of the 13 possible types. Table TABREF12 provides a description for each type of entity that we made available both to the annotators and to the participants of the challenge."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In the framework of the challenge, we were required to first identify the entities occurring in the dataset and, then, annotate them with of the 13 possible types. Table TABREF12 provides a description for each type of entity that we made available both to the annotators and to the participants of the challenge."
],
"extractive_spans": [],
"free_form_answer": "Identify the entities occurring in the dataset and annotate them with one of the 13 possible types.",
"highlighted_evidence": [
"In the framework of the challenge, we were required to first identify the entities occurring in the dataset and, then, annotate them with of the 13 possible types."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"151308d05e0c72153a5fc57a6188339a6a9c4055",
"f4d023402d81ea4ab333a9aec001df908030db2d"
],
"answer": [
{
"evidence": [
"Named Entity Recognition (NER) is a fundamental step for most of the information extraction pipelines. Importantly, the terse and difficult text style of tweets presents serious challenges to NER systems, which are usually trained using more formal text sources such as newswire articles or Wikipedia entries that follow particular morpho-syntactic rules. As a result, off-the-self tools trained on such data perform poorly BIBREF0 . The problem becomes more intense as the number of entities to be identified increases, moving from the traditional setting of very few entities (persons, organization, time, location) to problems with more. Furthermore, most of the resources (e.g., software tools) and benchmarks for NER are for text written in English. As the multilingual content online increases, and English may not be anymore the lingua franca of the Web. Therefore, having resources and benchmarks in other languages is crucial for enabling information access worldwide."
],
"extractive_spans": [],
"free_form_answer": "tweets contain informal text with multilingual content that becomes more difficult to classify when there are more options to choose from",
"highlighted_evidence": [
"Named Entity Recognition (NER) is a fundamental step for most of the information extraction pipelines. Importantly, the terse and difficult text style of tweets presents serious challenges to NER systems, which are usually trained using more formal text sources such as newswire articles or Wikipedia entries that follow particular morpho-syntactic rules. As a result, off-the-self tools trained on such data perform poorly BIBREF0 . The problem becomes more intense as the number of entities to be identified increases, moving from the traditional setting of very few entities (persons, organization, time, location) to problems with more. Furthermore, most of the resources (e.g., software tools) and benchmarks for NER are for text written in English. As the multilingual content online increases, and English may not be anymore the lingua franca of the Web. Therefore, having resources and benchmarks in other languages is crucial for enabling information access worldwide."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Named Entity Recognition (NER) is a fundamental step for most of the information extraction pipelines. Importantly, the terse and difficult text style of tweets presents serious challenges to NER systems, which are usually trained using more formal text sources such as newswire articles or Wikipedia entries that follow particular morpho-syntactic rules. As a result, off-the-self tools trained on such data perform poorly BIBREF0 . The problem becomes more intense as the number of entities to be identified increases, moving from the traditional setting of very few entities (persons, organization, time, location) to problems with more. Furthermore, most of the resources (e.g., software tools) and benchmarks for NER are for text written in English. As the multilingual content online increases, and English may not be anymore the lingua franca of the Web. Therefore, having resources and benchmarks in other languages is crucial for enabling information access worldwide."
],
"extractive_spans": [],
"free_form_answer": "NER systems are usually trained using texts that follow particular morpho-syntactic rules. The tweets have a different style and don't follow these rules.",
"highlighted_evidence": [
"Importantly, the terse and difficult text style of tweets presents serious challenges to NER systems, which are usually trained using more formal text sources such as newswire articles or Wikipedia entries that follow particular morpho-syntactic rules.",
"The problem becomes more intense as the number of entities to be identified increases, moving from the traditional setting of very few entities (persons, organization, time, location) to problems with more."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"7ab4763fd0e3a1670e786cdebf4d0a0f8c24ae79",
"d48eea65db630d221ef87939d02f60e705a45e1f"
],
"answer": [
{
"evidence": [
"Complementary to the collection of tweets using the Twitter API, we used 886 tweets provided by the “Société Nationale des Chemins de fer Français” (SNCF), that is the French National Railway Corporation. The latter subset is biased towards information in the interest of the corporation such as train lines or names of train stations. To account for the different distribution of entities in the tweets collected by SNCF we incorporated them in the data as follows:",
"For the training set, which comprises 3,000 tweets, we used 2,557 tweets collected using the API and 443 tweets of those provided by SNCF.",
"For the test set, which comprises 3,685 consists we used 3,242 tweets from those collected using the API and the remaining 443 tweets from those provided by SNCF.",
"In the framework of the challenge, we were required to first identify the entities occurring in the dataset and, then, annotate them with of the 13 possible types. Table TABREF12 provides a description for each type of entity that we made available both to the annotators and to the participants of the challenge."
],
"extractive_spans": [],
"free_form_answer": "The tweets were gathered using Twitter API plus tweets provided by the French National Railway Corporation; the tweets were split into training and test sets, and then annotated.",
"highlighted_evidence": [
"Complementary to the collection of tweets using the Twitter API, we used 886 tweets provided by the “Société Nationale des Chemins de fer Français” (SNCF), that is the French National Railway Corporation. ",
"For the training set, which comprises 3,000 tweets, we used 2,557 tweets collected using the API and 443 tweets of those provided by SNCF.",
"For the test set, which comprises 3,685 consists we used 3,242 tweets from those collected using the API and the remaining 443 tweets from those provided by SNCF.",
"In the framework of the challenge, we were required to first identify the entities occurring in the dataset and, then, annotate them with of the 13 possible types."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The paper is organized in two parts. In the first, we discuss the data preparation steps (collection, annotation) and we describe the proposed dataset. The dataset was first released in the framework of the CAp 2017 challenge, where 8 systems participated. Following, the second part of the paper presents an overview of baseline systems and the approaches employed by the systems that participated. We conclude with a discussion of the performance of Twitter NER systems and remarks for future work.",
"To collect the tweets that were used to construct the dataset we relied on the Twitter streaming API. The API makes available a part of Twitter flow and one may use particular keywords to filter the results. In order to collect tweets written in French and obtain a sample that would be unbiased towards particular types of entities we used common French words like articles, pronouns, and prepositions: “le”,“la”,“de”,“il”,“elle”, etc.. In total, we collected 10,000 unique tweets from September 1st until September the 15th of 2016.",
"Complementary to the collection of tweets using the Twitter API, we used 886 tweets provided by the “Société Nationale des Chemins de fer Français” (SNCF), that is the French National Railway Corporation. The latter subset is biased towards information in the interest of the corporation such as train lines or names of train stations. To account for the different distribution of entities in the tweets collected by SNCF we incorporated them in the data as follows:",
"In the framework of the challenge, we were required to first identify the entities occurring in the dataset and, then, annotate them with of the 13 possible types. Table TABREF12 provides a description for each type of entity that we made available both to the annotators and to the participants of the challenge."
],
"extractive_spans": [],
"free_form_answer": "collecting tweets both from the Twitter API and from SNCF to the identifying and annotating entities occurring in the tweets",
"highlighted_evidence": [
"The paper is organized in two parts. In the first, we discuss the data preparation steps (collection, annotation) and we describe the proposed dataset. ",
"To collect the tweets that were used to construct the dataset we relied on the Twitter streaming API. The API makes available a part of Twitter flow and one may use particular keywords to filter the results. In order to collect tweets written in French and obtain a sample that would be unbiased towards particular types of entities we used common French words like articles, pronouns, and prepositions: “le”,“la”,“de”,“il”,“elle”, etc.. In total, we collected 10,000 unique tweets from September 1st until September the 15th of 2016.\n\nComplementary to the collection of tweets using the Twitter API, we used 886 tweets provided by the “Société Nationale des Chemins de fer Français” (SNCF), that is the French National Railway Corporation. The latter subset is biased towards information in the interest of the corporation such as train lines or names of train stations. To account for the different distribution of entities in the tweets collected by SNCF we incorporated them in the data as follows:\n\n",
"In the framework of the challenge, we were required to first identify the entities occurring in the dataset and, then, annotate them with of the 13 possible types. Table TABREF12 provides a description for each type of entity that we made available both to the annotators and to the participants of the challenge."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What method did the highest scoring team use?",
"What descriptive statistics are provided about the data?",
"What was the level of inter-annotator agreement?",
"What questions were asked in the annotation process?",
"Why is NER for tweets more challenging as the number of entities increases?",
"What data preparation steps were used to construct the dataset?"
],
"question_id": [
"79258cea30cd6c0662df4bb712bf667589498a1f",
"8e5ce0d2635e7bdec4ba1b8d695cd06790c8cdaa",
"4e568134c896c4616bc7ab4924686d8d59b57ea1",
"55612e92791296baf18013d2c8dd0474f35af770",
"2f23bd86a9e27dcd88007c9058ddfce78a1a377b",
"e0b8a2649e384bbdb17472f8da2c3df4134b1e57"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Description of the 13 entities used to annotate the tweets. The description was made available to the participants for the development of their systems.",
"Figure 1: Distribution of entities across the 13 possible entity types for the training and test data. Overall, 2,902 entities occur in the training data and 3,660 in the test.",
"Table 2: Cohen’s Kappa for the interannotator agreement. “Ann\" stands for the annotator. The Table is symmetric.",
"Table 3: The scores for the evaluation measures used in the challenge for each of the participating systems. The official measure of the challenge was the microaveraged F1 measure. The best performance is shown in bold."
],
"file": [
"4-Table1-1.png",
"4-Figure1-1.png",
"5-Table2-1.png",
"6-Table3-1.png"
]
} | [
"What method did the highest scoring team use?",
"What descriptive statistics are provided about the data?",
"What was the level of inter-annotator agreement?",
"What questions were asked in the annotation process?",
"Why is NER for tweets more challenging as the number of entities increases?",
"What data preparation steps were used to construct the dataset?"
] | [
[
"1707.07568-Description of the Systems-3"
],
[
"1707.07568-Annotation-12",
"1707.07568-4-Figure1-1.png"
],
[
"1707.07568-5-Table2-1.png",
"1707.07568-Annotation-3"
],
[
"1707.07568-Annotation-0"
],
[
"1707.07568-Introduction-1"
],
[
"1707.07568-The Released Dataset-1",
"1707.07568-Introduction-3",
"1707.07568-The Released Dataset-4",
"1707.07568-The Released Dataset-3",
"1707.07568-The Released Dataset-2",
"1707.07568-Annotation-0"
]
] | [
"CRF model that used morphosyntactic and distributional features, as well as word clusters based on these learned representations.",
"Entity distribution in the training and test data.",
"Average Cohen’s Kappa score of inter-annotator agreement was 0.655",
"Identify the entities occurring in the dataset and annotate them with one of the 13 possible types.",
"NER systems are usually trained using texts that follow particular morpho-syntactic rules. The tweets have a different style and don't follow these rules.",
"collecting tweets both from the Twitter API and from SNCF to the identifying and annotating entities occurring in the tweets"
] | 335 |
2001.11316 | Adversarial Training for Aspect-Based Sentiment Analysis with BERT | Aspect-Based Sentiment Analysis (ABSA) deals with the extraction of sentiments and their targets. Collecting labeled data for this task in order to help neural networks generalize better can be laborious and time-consuming. As an alternative, similar data to the real-world examples can be produced artificially through an adversarial process which is carried out in the embedding space. Although these examples are not real sentences, they have been shown to act as a regularization method which can make neural networks more robust. In this work, we apply adversarial training, which was put forward by Goodfellow et al. (2014), to the post-trained BERT (BERT-PT) language model proposed by Xu et al. (2019) on the two major tasks of Aspect Extraction and Aspect Sentiment Classification in sentiment analysis. After improving the results of post-trained BERT by an ablation study, we propose a novel architecture called BERT Adversarial Training (BAT) to utilize adversarial training in ABSA. The proposed model outperforms post-trained BERT in both tasks. To the best of our knowledge, this is the first study on the application of adversarial training in ABSA. | {
"paragraphs": [
[
"Understanding what people are talking about and how they feel about it is valuable especially for industries which need to know the customers' opinions on their products. Aspect-Based Sentiment Analysis (ABSA) is a branch of sentiment analysis which deals with extracting the opinion targets (aspects) as well as the sentiment expressed towards them. For instance, in the sentence The spaghetti was out of this world., a positive sentiment is mentioned towards the target which is spaghetti. Performing these tasks requires a deep understanding of the language. Traditional machine learning methods such as SVM BIBREF2, Naive Bayes BIBREF3, Decision Trees BIBREF4, Maximum Entropy BIBREF5 have long been practiced to acquire such knowledge. However, in recent years due to the abundance of available data and computational power, deep learning methods such as CNNs BIBREF6, BIBREF7, BIBREF8, RNNs BIBREF9, BIBREF10, BIBREF11, and the Transformer BIBREF12 have outperformed the traditional machine learning techniques in various tasks of sentiment analysis. Bidirectional Encoder Representations from Transformers (BERT) BIBREF13 is a deep and powerful language model which uses the encoder of the Transformer in a self-supervised manner to learn the language model. It has been shown to result in state-of-the-art performances on the GLUE benchmark BIBREF14 including text classification. BIBREF1 show that adding domain-specific information to this model can enhance its performance in ABSA. Using their post-trained BERT (BERT-PT), we add adversarial examples to further improve BERT's performance on Aspect Extraction (AE) and Aspect Sentiment Classification (ASC) which are two major tasks in ABSA. A brief overview of these two sub-tasks is given in Section SECREF3.",
"Adversarial examples are a way of fooling a neural network to behave incorrectly BIBREF15. They are created by applying small perturbations to the original inputs. In the case of images, the perturbations can be invisible to human eye, but can cause neural networks to output a completely different response from the true one. Since neural nets make mistakes on these examples, introducing them to the network during the training can improve their performance. This is called adversarial training which acts as a regularizer to help the network generalize better BIBREF0. Due to the discrete nature of text, it is not feasible to produce perturbed examples from the original inputs. As a workaround, BIBREF16 apply this technique to the word embedding space for text classification. Inspired by them and building on the work of BIBREF1, we experiment with adversarial training for ABSA.",
"Our contributions are twofold. First, by carrying out an ablation study on the number of training epochs and the values for dropout in the classification layer, we show that there are values that outperform the specified ones for BERT-PT. Second, we introduce the application of adversarial training in ABSA by proposing a novel architecture which combines adversarial training with the BERT language model for AE and ASC tasks. Our experiments show that the proposed model outperforms the best performance of BERT-PT in both tasks."
],
[
"Since the early works on ABSA BIBREF17, BIBREF18, BIBREF19, several methods have been put forward to address the problem. In this section, we review some of the works which have utilized deep learning techniques.",
"BIBREF20 design a seven-layer CNN architecture and make use of both part of speech tagging and word embeddings as features. BIBREF21 use convolutional neural networks and domain-specific data for AE and ASC. They show that adding the word embeddings produced from the domain-specific data to the general purpose embeddings semantically enriches them regarding the task at hand. In a recent work BIBREF1, the authors also show that using in-domain data can enhance the performance of the state-of-the-art language model (BERT). Similarly, BIBREF22 also fine-tune BERT on domain-specific data for ASC. They perform a two-stage process, first of which is self-supervised in-domain fine-tuning, followed by supervised task-specific fine-tuning. Working on the same task, BIBREF23 apply graph convolutional networks taking into consideration the assumption that in sentences with multiple aspects, the sentiment about one aspect can help determine the sentiment of another aspect.",
"Since its introduction by BIBREF24, attention mechanism has become widely popular in many natural language processing tasks including sentiment analysis. BIBREF25 design a network to transfer aspect knowledge learned from a coarse-grained network which performs aspect category sentiment classification to a fine-grained one performing aspect term sentiment classification. This is carried out using an attention mechanism (Coarse2Fine) which contains an autoencoder that emphasizes the aspect term by learning its representation from the category embedding. Similar to the Transformer, which does away with RNNs and CNNs and use only attention for translation, BIBREF26 design an attention model for ASC with the difference that they use lighter (weight-wise) multi-head attentions for context and target word modeling. Using bidirectional LSTMs BIBREF27, BIBREF28 propose a model that takes into account the history of aspects with an attention block called Truncated History Attention (THA). To capture the opinion summary, they also introduce Selective Transformation Network (STN) which highlights more important information with respect to a given aspect. BIBREF29 approach the aspect extraction in an unsupervised way. Functioning the same way as an autoencoder, their model has been designed to reconstruct sentence embeddings in which aspect-related words are given higher weights through attention mechanism.",
"While adversarial training has been utilized for sentence classification BIBREF16, its effects have not been studied in ABSA. Therefore, in this work, we study the impact of applying adversarial training to the powerful BERT language model."
],
[
"In this section, we give a brief description of two major tasks in ABSA which are called Aspect Extraction (AE) and Aspect Sentiment Classification (ASC). These tasks were sub-tasks of task 4 in SemEval 2014 contest BIBREF30, and since then they have been the focus of attention in many studies.",
"Aspect Extraction. Given a collection of review sentences, the goal is to extract all the terms, such as waiter, food, and price in the case of restaurants, which point to aspects of a larger entity BIBREF30. In order to perform this task, it is usually modeled as a sequence labeling task, where each word of the input is labeled as one of the three letters in {B, I, O}. Label `B' stands for Beginning of the aspect terms, `I' for Inside (aspect terms' continuation), and `O' for Outside or non-aspect terms. The reason for Inside label is that sometimes aspects can contain two or more words and the system has to return all of them as the aspect. In order for a sequence ($s$) of $n$ words to be fed into the BERT architecture, they are represented as",
"$[CLS], w_1, w_2, ..., w_n, [SEP]$",
"where the $[CLS]$ token is an indicator of the beginning of the sequence as well as its sentiment when performing sentiment classification. The $[SEP]$ token is a token to separate a sequence from the subsequent one. Finally, $w_{i}$ are the words of the sequence. After they go through the BERT model, for each item of the sequence, a vector representation of the size 768, size of BERT's hidden layers, is computed. Then, we apply a fully connected layer to classify each word vector as one of the three labels.",
"Aspect Sentiment Classification. Given the aspects with the review sentence, the aim in ASC is to classify the sentiment towards each aspect as Positive, Negative, Neutral. For this task, the input format for the BERT model is the same as in AE. After the input goes through the network, in the last layer the sentiment is represented by the $[CLS]$ token. Then, a fully connected layer is applied to this token representation in order to extract the sentiment."
],
[
"Our model is depicted in Figure FIGREF1. As can be seen, we create adversarial examples from BERT embeddings using the gradient of the loss. Then, we feed the perturbed examples to the BERT encoder to calculate the adversarial loss. In the end, the backpropagation algorithm is applied to the sum of both losses.",
"BERT Word Embedding Layer. The calculation of input embeddings in BERT is carried out using three different embeddings. As shown in Figure FIGREF2, it is computed by summing over token, segment, and position embeddings. Token embedding is the vector representation of each token in the vocabulary which is achieved using WordPiece embeddings BIBREF31. Position embeddings are used to preserve the information about the position of the words in the sentence. Segment embeddings are used in order to distinguish between sentences if there is more than one (e.g. for question answering task there are two). Words belonging to one sentence are labeled the same.",
"BERT Encoder. BERT encoder is constructed by making use of Transformer blocks from the Transformer model. For $\\mathbf {BERT_{BASE}}$, these blocks are used in 12 layers, each of which consists of 12 multi-head attention blocks. In order to make the model aware of both previous and future contexts, BERT uses the Masked Language Model (MLM) where $15\\%$ of the input sentence is masked for prediction.",
"Fully Connected Layer and Loss Function. The job of the fully connected layer in the architecture is to classify the output embeddings of BERT encoder into sentiment classes. Therefore, its size is $768\\times 3$ where the first element is the hidden layers' size of BERT encoder and the second element is the number of classes. For the loss function, we use cross entropy loss implemented in Pytorch.",
"Adversarial Examples. Adversarial examples are created to attack a neural network to make erroneous predictions. There are two main types of adversarial attacks which are called white-box and black-box. White-box attacks BIBREF32 have access to the model parameters, while black-box attacks BIBREF33 work only on the input and output. In this work, we utilize a white-box method working on the embedding level. In order to create adversarial examples, we utilize the formula used by BIBREF16, where the perturbations are created using gradient of the loss function. Assuming $p(y|x;\\theta )$ is the probability of label $y$ given the input $x$ and the model parameters $\\theta $, in order to find the adversarial examples the following minimization problem should be solved:",
"where $r$ denotes the perturbations on the input and $\\hat{\\theta }$ is a constant copy of $\\theta $ in order not to allow the gradients to propagate in the process of constructing the artificial examples. Solving the above minimization problem means that we are searching for the worst perturbations while trying to minimize the loss of the model. An approximate solution for Equation DISPLAY_FORM3 is found by linearizing $\\log p(y|x;\\theta )$ around $x$ BIBREF0. Therefore, the following perturbations are added to the input embeddings to create new adversarial sentences in the embedding space.",
"where",
"and $\\epsilon $ is the size of the perturbations. In order to find values which outperform the original results, we carried out an ablation study on five values for epsilon whose results are presented in Figure FIGREF7 and discussed in Section SECREF6. After the adversarial examples go through the network, their loss is calculated as follows:",
"$- \\log p(y|x + r_{adv};\\theta )$",
"Then, this loss is added to the loss of the real examples in order to compute the model's loss."
],
[
"Datasets. In order for the results to be consistent with previous works, we experimented with the benchmark datasets from SemEval 2014 task 4 BIBREF30 and SemEval 2016 task 5 BIBREF34 competitions. The laptop dataset is taken from SemEval 2014 and is used for both AE and ASC tasks. However, the restaurant dataset for AE is a SemEval 2014 dataset while for ASC is a SemEval 2016 dataset. The reason for the difference is to be consistent with the previous works. A summary of these datasets can be seen in Tables TABREF8 and TABREF8.",
"Implementation details. We performed all our experiments on a GPU (GeForce RTX 2070) with 8 GB of memory. Except for the code specific to our model, we adapted the codebase utilized by BERT-PT. To carry out the ablation study of BERT-PT model, batches of 32 were specified. However, to perform the experiments for our proposed model, we reduced the batch size to 16 in order for the GPU to be able to store our model. For optimization, the Adam optimizer with a learning rate of $3e-5$ was used. From SemEval's training data, 150 examples were chosen for the validation and the remaining was used for training the model.",
"Implementing the creation of adversarial examples for ASC task was slightly different from doing it for AE task. During our experiments, we realized that modifying all the elements of input vectors does not improve the results. Therefore, we decided not to modify the vector for the $[CLS]$ token. Since the $[CLS]$ token is responsible for the class label in the output, it seems reasonable not to change it in the first place and only perform the modification on the word vectors of the input sentence. In other words, regarding the fact that the $[CLS]$ token is the class label, to create an adversarial example, we should only change the words of the sentence, not the ground-truth label.",
"Evaluation. To evaluate the performance of the model, we utilized the official script of the SemEval contest for AE. These results are reported as F1 scores. For ASC, to be consistent with BERT-PT, we utilized their script whose results are reported in Accuracy and Macro-F1 (MF1) measures. Macro-F1 is the average of F1 score for each class and it is used to deal with the issue of unbalanced classes."
],
[
"To perform the ablation study, first we initialize our model with post-trained BERT which has been trained on uncased version of $\\mathbf {BERT_{BASE}}$. We attempt to discover what number of training epochs and which dropout probability yield the best performance for BERT-PT. Since one and two training epochs result in very low scores, results of 3 to 10 training epochs have been depicted for all experiments. For AE, we experiment with 10 different dropout values in the fully connected (linear) layer. The results can be seen in Figure FIGREF6 for laptop and restaurant datasets. To be consistent with the previous work and because of the results having high variance, each point in the figure (F1 score) is the average of 9 runs. In the end, for each number of training epochs, a dropout value, which outperforms the other values, is found. In our experiments, we noticed that the validation loss increases after 2 epochs as has been mentioned in the original paper. However, the test results do not follow the same pattern. Looking at the figures, it can be seen that as the number of training epochs increases, better results are produced in the restaurant domain while in the laptop domain the scores go down. This can be attributed to the selection of validation sets as for both domains the last 150 examples of the SemEval training set were selected. Therefore, it can be said that the examples in the validation and test sets for laptop have more similar patterns than those of restaurant dataset. To be consistent with BERT-PT, we performed the same selection.",
"In order to compare the effect of adversarial examples on the performance of the model, we choose the best dropout for each number of epochs and experiment with five different values for epsilon (perturbation size). The results for laptop and restaurant can be seen in Figure FIGREF7. As is noticeable, in terms of scores, they follow the same pattern as the original ones. Although most of the epsilon values improve the results, it can be seen in Figure FIGREF7 that not all of them will enhance the model's performance. In the case of $\\epsilon =5.0$ for AE, while it boosts the performance in the restaurant domain for most of the training epochs, it negatively affects the performance in the laptop domain. The reason for this could be the creation of adversarial examples which are not similar to the original ones but are labeled the same. In other words, the new examples greatly differ from the original ones but are fed to the net as being similar, leading to the network's poorer performance.",
"Observing, from AE task, that higher dropouts perform poorly, we experiment with the 5 lower values for ASC task in BERT-PT experiments. In addition, for BAT experiments, two different values ($0.01, 0.1$) for epsilon are tested to make them more diverse. The results are depicted in Figures FIGREF9 and FIGREF10 for BERT-PT and BAT, respectively. While in AE, towards higher number of training epochs, there is an upward trend for restaurant and a downward trend for laptop, in ASC a clear pattern is not observed. Regarding the dropout, lower values ($0.1$ for laptop, $0.2$ for restaurant) yield the best results for BERT-PT in AE task, but in ASC a dropout probability of 0.4 results in top performance in both domains. The top performing epsilon value for both domains in ASC, as can be seen in Figure FIGREF10, is 5.0 which is the same as the best value for restaurant domain in AE task. This is different from the top performing $\\epsilon = 0.2$ for laptop in AE task which was mentioned above.",
"From the ablation studies, we extract the best results of BERT-PT and compare them with those of BAT. These are summarized in Tables TABREF11 and TABREF11 for aspect extraction and aspect sentiment classification, respectively. As can be seen in Table TABREF11, the best parameters for BERT-PT have greatly improved its original performance on restaurant dataset (+2.72) compared to laptop (+0.62). Similar improvements can be seen in ASC results with an increase of +2.16 in MF1 score for restaurant compared to +0.81 for laptop which is due to the increase in the number of training epochs for restaurant domain since it exhibits better results with more training while the model reaches its peak performance for laptop domain in earlier training epochs. In addition, applying adversarial training improves the network's performance in both tasks, though at different rates. While for laptop there are similar improvements in both tasks (+0.69 in AE, +0.61 in ASC), for restaurant we observe different enhancements (+0.81 in AE, +0.12 in ASC). This could be attributed to the fact that these are two different datasets whereas the laptop dataset is the same for both tasks. Furthermore, the perturbation size plays an important role in performance of the system. By choosing the appropriate ones, as was shown, better results are achieved."
],
[
"In this paper, we introduced the application of adversarial training in Aspect-Based Sentiment Analysis. The experiments with our proposed architecture show that the performance of the post-trained BERT on aspect extraction and aspect sentiment classification tasks are improved by utilizing adversarial examples during the network training. As future work, other white-box adversarial examples as well as black-box ones will be utilized for a comparison of adversarial training methods for various sentiment analysis tasks. Furthermore, the impact of adversarial training in the other tasks in ABSA namely Aspect Category Detection and Aspect Category Polarity will be investigated."
],
[
"We would like to thank Adidas AG for funding this work."
]
],
"section_name": [
"Introduction",
"Related Work",
"Aspect-Based Sentiment Analysis Tasks",
"Model",
"Experimental Setup",
"Ablation Study and Results Analysis",
"Conclusion",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"27d7c7223ab53230b7fc4ec44684a3714d4c9461",
"47ca4664a8e656fb8d11f9da421e9e983e78167b"
],
"answer": [
{
"evidence": [
"Datasets. In order for the results to be consistent with previous works, we experimented with the benchmark datasets from SemEval 2014 task 4 BIBREF30 and SemEval 2016 task 5 BIBREF34 competitions. The laptop dataset is taken from SemEval 2014 and is used for both AE and ASC tasks. However, the restaurant dataset for AE is a SemEval 2014 dataset while for ASC is a SemEval 2016 dataset. The reason for the difference is to be consistent with the previous works. A summary of these datasets can be seen in Tables TABREF8 and TABREF8.",
"FLOAT SELECTED: Table 1. Laptop and restaurant datasets for AE. S: Sentences; A: Aspects; Rest16: Restaurant dataset from SemEval 2016.",
"FLOAT SELECTED: Table 2. Laptop and restaurant datasets for ASC. Pos, Neg, Neu: Number of positive, negative, and neutral sentiments, respectively; Rest14: Restaurant dataset from SemEval 2014"
],
"extractive_spans": [],
"free_form_answer": "SemEval 2016 contains 6521 sentences, SemEval 2014 contains 7673 sentences",
"highlighted_evidence": [
"A summary of these datasets can be seen in Tables TABREF8 and TABREF8.",
"FLOAT SELECTED: Table 1. Laptop and restaurant datasets for AE. S: Sentences; A: Aspects; Rest16: Restaurant dataset from SemEval 2016.",
"FLOAT SELECTED: Table 2. Laptop and restaurant datasets for ASC. Pos, Neg, Neu: Number of positive, negative, and neutral sentiments, respectively; Rest14: Restaurant dataset from SemEval 2014"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Datasets. In order for the results to be consistent with previous works, we experimented with the benchmark datasets from SemEval 2014 task 4 BIBREF30 and SemEval 2016 task 5 BIBREF34 competitions. The laptop dataset is taken from SemEval 2014 and is used for both AE and ASC tasks. However, the restaurant dataset for AE is a SemEval 2014 dataset while for ASC is a SemEval 2016 dataset. The reason for the difference is to be consistent with the previous works. A summary of these datasets can be seen in Tables TABREF8 and TABREF8.",
"FLOAT SELECTED: Table 1. Laptop and restaurant datasets for AE. S: Sentences; A: Aspects; Rest16: Restaurant dataset from SemEval 2016.",
"FLOAT SELECTED: Table 2. Laptop and restaurant datasets for ASC. Pos, Neg, Neu: Number of positive, negative, and neutral sentiments, respectively; Rest14: Restaurant dataset from SemEval 2014"
],
"extractive_spans": [],
"free_form_answer": "Semeval 2014 for ASC has total of 2951 and 4722 sentiments for Laptop and Restaurnant respectively, while SemEval 2016 for AE has total of 3857 and 5041 sentences on Laptop and Resaurant respectively.",
"highlighted_evidence": [
"Datasets. In order for the results to be consistent with previous works, we experimented with the benchmark datasets from SemEval 2014 task 4 BIBREF30 and SemEval 2016 task 5 BIBREF34 competitions. The laptop dataset is taken from SemEval 2014 and is used for both AE and ASC tasks. However, the restaurant dataset for AE is a SemEval 2014 dataset while for ASC is a SemEval 2016 dataset. The reason for the difference is to be consistent with the previous works. A summary of these datasets can be seen in Tables TABREF8 and TABREF8.",
"FLOAT SELECTED: Table 1. Laptop and restaurant datasets for AE. S: Sentences; A: Aspects; Rest16: Restaurant dataset from SemEval 2016.",
"FLOAT SELECTED: Table 2. Laptop and restaurant datasets for ASC. Pos, Neg, Neu: Number of positive, negative, and neutral sentiments, respectively; Rest14: Restaurant dataset from SemEval 2014"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1eba4a547407a626a458eae6461e1775b2c00e7a",
"dc1f086b396265f0d9494033757f3e1f5a2525de"
],
"answer": [
{
"evidence": [
"Adversarial Examples. Adversarial examples are created to attack a neural network to make erroneous predictions. There are two main types of adversarial attacks which are called white-box and black-box. White-box attacks BIBREF32 have access to the model parameters, while black-box attacks BIBREF33 work only on the input and output. In this work, we utilize a white-box method working on the embedding level. In order to create adversarial examples, we utilize the formula used by BIBREF16, where the perturbations are created using gradient of the loss function. Assuming $p(y|x;\\theta )$ is the probability of label $y$ given the input $x$ and the model parameters $\\theta $, in order to find the adversarial examples the following minimization problem should be solved:",
"where $r$ denotes the perturbations on the input and $\\hat{\\theta }$ is a constant copy of $\\theta $ in order not to allow the gradients to propagate in the process of constructing the artificial examples. Solving the above minimization problem means that we are searching for the worst perturbations while trying to minimize the loss of the model. An approximate solution for Equation DISPLAY_FORM3 is found by linearizing $\\log p(y|x;\\theta )$ around $x$ BIBREF0. Therefore, the following perturbations are added to the input embeddings to create new adversarial sentences in the embedding space."
],
"extractive_spans": [
"we are searching for the worst perturbations while trying to minimize the loss of the model"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this work, we utilize a white-box method working on the embedding level. In order to create adversarial examples, we utilize the formula used by BIBREF16, where the perturbations are created using gradient of the loss function. Assuming $p(y|x;\\theta )$ is the probability of label $y$ given the input $x$ and the model parameters $\\theta $, in order to find the adversarial examples the following minimization problem should be solved:\n\nwhere $r$ denotes the perturbations on the input and $\\hat{\\theta }$ is a constant copy of $\\theta $ in order not to allow the gradients to propagate in the process of constructing the artificial examples. Solving the above minimization problem means that we are searching for the worst perturbations while trying to minimize the loss of the model. An approximate solution for Equation DISPLAY_FORM3 is found by linearizing $\\log p(y|x;\\theta )$ around $x$ BIBREF0. Therefore, the following perturbations are added to the input embeddings to create new adversarial sentences in the embedding space."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Adversarial Examples. Adversarial examples are created to attack a neural network to make erroneous predictions. There are two main types of adversarial attacks which are called white-box and black-box. White-box attacks BIBREF32 have access to the model parameters, while black-box attacks BIBREF33 work only on the input and output. In this work, we utilize a white-box method working on the embedding level. In order to create adversarial examples, we utilize the formula used by BIBREF16, where the perturbations are created using gradient of the loss function. Assuming $p(y|x;\\theta )$ is the probability of label $y$ given the input $x$ and the model parameters $\\theta $, in order to find the adversarial examples the following minimization problem should be solved:"
],
"extractive_spans": [],
"free_form_answer": "By using a white-box method using perturbation calculated based on the gradient of the loss function.",
"highlighted_evidence": [
"In this work, we utilize a white-box method working on the embedding level. In order to create adversarial examples, we utilize the formula used by BIBREF16, where the perturbations are created using gradient of the loss function. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"153d79447c5d921a94c41bac69945ccd03362103",
"60e2e6106315df63586db9d08be21e82c35dbb1b"
],
"answer": [
{
"evidence": [
"To perform the ablation study, first we initialize our model with post-trained BERT which has been trained on uncased version of $\\mathbf {BERT_{BASE}}$. We attempt to discover what number of training epochs and which dropout probability yield the best performance for BERT-PT. Since one and two training epochs result in very low scores, results of 3 to 10 training epochs have been depicted for all experiments. For AE, we experiment with 10 different dropout values in the fully connected (linear) layer. The results can be seen in Figure FIGREF6 for laptop and restaurant datasets. To be consistent with the previous work and because of the results having high variance, each point in the figure (F1 score) is the average of 9 runs. In the end, for each number of training epochs, a dropout value, which outperforms the other values, is found. In our experiments, we noticed that the validation loss increases after 2 epochs as has been mentioned in the original paper. However, the test results do not follow the same pattern. Looking at the figures, it can be seen that as the number of training epochs increases, better results are produced in the restaurant domain while in the laptop domain the scores go down. This can be attributed to the selection of validation sets as for both domains the last 150 examples of the SemEval training set were selected. Therefore, it can be said that the examples in the validation and test sets for laptop have more similar patterns than those of restaurant dataset. To be consistent with BERT-PT, we performed the same selection."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To perform the ablation study, first we initialize our model with post-trained BERT which has been trained on uncased version of $\\mathbf {BERT_{BASE}}$. We attempt to discover what number of training epochs and which dropout probability yield the best performance for BERT-PT. "
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1a2a87537a68a997e37f61352e58af56189401cd",
"fd73dcaa17ef7b3479090d2128cae27ecb0818e9"
],
"answer": [
{
"evidence": [
"Our model is depicted in Figure FIGREF1. As can be seen, we create adversarial examples from BERT embeddings using the gradient of the loss. Then, we feed the perturbed examples to the BERT encoder to calculate the adversarial loss. In the end, the backpropagation algorithm is applied to the sum of both losses."
],
"extractive_spans": [
"adversarial examples from BERT embeddings using the gradient of the loss",
"we feed the perturbed examples to the BERT encoder "
],
"free_form_answer": "",
"highlighted_evidence": [
"As can be seen, we create adversarial examples from BERT embeddings using the gradient of the loss. Then, we feed the perturbed examples to the BERT encoder to calculate the adversarial loss. In the end, the backpropagation algorithm is applied to the sum of both losses."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Understanding what people are talking about and how they feel about it is valuable especially for industries which need to know the customers' opinions on their products. Aspect-Based Sentiment Analysis (ABSA) is a branch of sentiment analysis which deals with extracting the opinion targets (aspects) as well as the sentiment expressed towards them. For instance, in the sentence The spaghetti was out of this world., a positive sentiment is mentioned towards the target which is spaghetti. Performing these tasks requires a deep understanding of the language. Traditional machine learning methods such as SVM BIBREF2, Naive Bayes BIBREF3, Decision Trees BIBREF4, Maximum Entropy BIBREF5 have long been practiced to acquire such knowledge. However, in recent years due to the abundance of available data and computational power, deep learning methods such as CNNs BIBREF6, BIBREF7, BIBREF8, RNNs BIBREF9, BIBREF10, BIBREF11, and the Transformer BIBREF12 have outperformed the traditional machine learning techniques in various tasks of sentiment analysis. Bidirectional Encoder Representations from Transformers (BERT) BIBREF13 is a deep and powerful language model which uses the encoder of the Transformer in a self-supervised manner to learn the language model. It has been shown to result in state-of-the-art performances on the GLUE benchmark BIBREF14 including text classification. BIBREF1 show that adding domain-specific information to this model can enhance its performance in ABSA. Using their post-trained BERT (BERT-PT), we add adversarial examples to further improve BERT's performance on Aspect Extraction (AE) and Aspect Sentiment Classification (ASC) which are two major tasks in ABSA. A brief overview of these two sub-tasks is given in Section SECREF3."
],
"extractive_spans": [],
"free_form_answer": "They added adversarial examples in training to improve the post-trained BERT model",
"highlighted_evidence": [
"Using their post-trained BERT (BERT-PT), we add adversarial examples to further improve BERT's performance on Aspect Extraction (AE) and Aspect Sentiment Classification (ASC) which are two major tasks in ABSA."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"76107b2d8cc718ebcdd5eae948aec90d6551d7ad",
"df92813447243202d0a3cb4472f3102050bc188a"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How long is the dataset?",
"How are adversarial examples generated?",
"Is BAT smaller (in number of parameters) than post-trained BERT?",
"What are the modifications made to post-trained BERT?",
"What aspects are considered?"
],
"question_id": [
"16b816925567deb734049416c149747118e13963",
"9b536f4428206ef7afabc4ff0a2ebcbabd68b985",
"9d04fc997689f44e5c9a551b8571a60b621d35c2",
"8a0e1a298716698a305153c524bf03d18969b1c6",
"538430077b1820011c609c8ae147389b960932c8"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"BERT Sentiment Analysis",
"BERT Sentiment Analysis",
"BERT Sentiment Analysis",
"BERT Sentiment Analysis",
"BERT Sentiment Analysis"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1. The proposed architecture: BERT Adversarial Training (BAT)",
"Figure 2. BERT word embedding layer (Devlin et al., 2018)",
"Figure 3. Ablation results on the impact of training epochs and dropout value in post-trained BERT for AE task.",
"Figure 4. Comparing best results of BERT-PT and BAT with different sizes of perturbations ( ) for AE task.",
"Table 1. Laptop and restaurant datasets for AE. S: Sentences; A: Aspects; Rest16: Restaurant dataset from SemEval 2016.",
"Table 2. Laptop and restaurant datasets for ASC. Pos, Neg, Neu: Number of positive, negative, and neutral sentiments, respectively; Rest14: Restaurant dataset from SemEval 2014",
"Figure 5. Ablation results on the impact of training epochs and dropout value in post-trained BERT for ASC task.",
"Figure 6. Comparing best results of BERT-PT and BAT with different sizes of perturbations ( ) for ASC task.",
"Table 3. Aspect extraction (AE) results",
"Table 4. Aspect sentiment classification (ASC) results. Acc: Accuracy; MF1: Macro-F1."
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"4-Figure4-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Figure5-1.png",
"5-Figure6-1.png",
"6-Table3-1.png",
"6-Table4-1.png"
]
} | [
"How long is the dataset?",
"How are adversarial examples generated?",
"What are the modifications made to post-trained BERT?"
] | [
[
"2001.11316-4-Table2-1.png",
"2001.11316-Experimental Setup-0",
"2001.11316-4-Table1-1.png"
],
[
"2001.11316-Model-5",
"2001.11316-Model-4"
],
[
"2001.11316-Model-0",
"2001.11316-Introduction-0"
]
] | [
"Semeval 2014 for ASC has total of 2951 and 4722 sentiments for Laptop and Restaurnant respectively, while SemEval 2016 for AE has total of 3857 and 5041 sentences on Laptop and Resaurant respectively.",
"By using a white-box method using perturbation calculated based on the gradient of the loss function.",
"They added adversarial examples in training to improve the post-trained BERT model"
] | 338 |
1610.07149 | Two are Better than One: An Ensemble of Retrieval- and Generation-Based Dialog Systems | Open-domain human-computer conversation has attracted much attention in the field of NLP. Contrary to rule- or template-based domain-specific dialog systems, open-domain conversation usually requires data-driven approaches, which can be roughly divided into two categories: retrieval-based and generation-based systems. Retrieval systems search a user-issued utterance (called a query) in a large database, and return a reply that best matches the query. Generative approaches, typically based on recurrent neural networks (RNNs), can synthesize new replies, but they suffer from the problem of generating short, meaningless utterances. In this paper, we propose a novel ensemble of retrieval-based and generation-based dialog systems in the open domain. In our approach, the retrieved candidate, in addition to the original query, is fed to an RNN-based reply generator, so that the neural model is aware of more information. The generated reply is then fed back as a new candidate for post-reranking. Experimental results show that such ensemble outperforms each single part of it by a large margin. | {
"paragraphs": [
[
"Automatic dialog/conversation systems have served humans for a long time in various fields, ranging from train routing nbcitetrain to museum guiding nbcitemuseum. In the above scenarios, the dialogs are domain-specific, and a typical approach to such in-domain systems is by human engineering, for example, using manually constructed ontologies nbciteyoungsigdial, natural language templates nbcitetemplate, and even predefined dialog states nbcitestatetracking.",
"Recently, researchers have paid increasing attention to open-domain, chatbot-style human-computer conversation, because of its important commercial applications, and because it tackles the real challenges of natural language understanding and generation nbciteretrieval1,acl,aaai. For open-domain dialogs, rules and temples would probably fail as we can hardly handle the great diversity of dialog topics and natural language sentences. With the increasing number of human-human conversation utterances available on the Internet, previous studies have developed data-oriented approaches in the open domain, which can be roughly categorized into two groups: retrieval systems and generative systems.",
"When a user issues an utterance (called a query), retrieval systems search for a most similar query in a massive database (which consists of large numbers of query-reply pairs), and respond to the user with the corresponding reply nbciteretrieval1,retrieval2. Through information retrieval, however, we cannot obtain new utterances, that is, all replies have to appear in the database. Also, the ranking of candidate replies is usually judged by surface forms (e.g., word overlaps, tf $\\cdot $ idf features) and hardly addresses the real semantics of natural languages.",
"Generative dialog systems, on the other hand, can synthesize a new sentence as the reply by language models nbciteBoWdialog,acl,aaai. Typically, a recurrent neural network (RNN) captures the query's semantics with one or a few distributed, real-valued vectors (also known as embeddings); another RNN decodes the query embeddings to a reply. Deep neural networks allow complicated interaction by multiple non-linear transformations; RNNs are further suitable for modeling time-series data (e.g., a sequence of words) especially when enhanced with long short term memory (LSTM) or gated recurrent units (GRUs). Despite these, RNN also has its own weakness when applied to dialog systems: the generated sentence tends to be short, universal, and meaningless, for example, “I don't know” nbcitenaacl or “something” nbciteaaai. This is probably because chatbot-like dialogs are highly diversified and a query may not convey sufficient information for the reply. Even though such universal utterances may be suited in certain dialog context, they make users feel boring and lose interest, and thus are not desirable in real applications.",
"In this paper, we are curious if we can combine the above two streams of approaches for open-domain conversation. To this end, we propose an ensemble of retrieval and generative dialog systems. Given a user-issued query, we first obtain a candidate reply by information retrieval from a large database. The query, along with the candidate reply, is then fed to an utterance generator based on the “bi-sequence to sequence” (biseq2seq) model nbcitemultiseq2seq. Such sequence generator takes into consideration the information contained in not only the query but also the retrieved reply; hence, it alleviates the low-substance problem and can synthesize replies that are more meaningful. After that we use the scorer in the retrieval system again for post-reranking. This step can filter out less relevant retrieved replies or meaningless generated ones. The higher ranked candidate (either retrieved or generated) is returned to the user as the reply.",
"From the above process, we see that the retrieval and generative systems are integrated by two mechanisms: (1) The retrieved candidate is fed to the sequence generator to mitigate the “low-substance” problem; (2) The post-reranker can make better use of both the retrieved candidate and the generated utterance. In this sense, we call our overall approach an ensemble in this paper. To the best of our knowledge, we are the first to combine retrieval and generative models for open-domain conversation.",
"Experimental results show that our ensemble model consistently outperforms each single component in terms of several subjective and objective metrics, and that both retrieval and generative methods contribute an important portion to the overall approach. This also verifies the rationale for building model ensembles for dialog systems."
],
[
"Figure 1 depicts the overall framework of our proposed ensemble of retrieval and generative dialog systems. It mainly consists of the following components.",
"When a user sends a query utterance $q$ , our approach utilizes a state-of-the-practice information retrieval system to search for a query-reply pair $\\langle q^*, r^*\\rangle $ that best matches the user-issued query $q$ . The corresponding $r^*$ is retrieved as a candidate reply.",
"Then a biseq2seq model takes the original query $q$ and the retrieved candidate reply $r^*$ as input, each sequence being transformed to a fixed-size vector. These two vectors are concatenated and linearly transformed as the initial state of the decoder, which generates a new utterance $r^\\text{+}$ as another candidate reply.",
"Finally, we use a reranker (which is a part of the retrieval system) to select either $r^*$ or $r^\\text{+}$ as the ultimate response to the original query $q$ .",
"In the rest of this section, we describe each component in detail."
],
[
"Information retrieval is among prevailing techniques for open-domain, chatbot-style human-computer conversation nbciteretrieval1,retrieval2.",
"We utilize a state-of-the-practice retrieval system with extensive manual engineering and on a basis of tens of millions of existing human-human utterance pairs. Basically, it works in a two-step retrieval-and-ranking strategy, similar to the Lucene and Solr systems.",
"First, a user-issued utterance is treated as bag-of-words features with stop-words being removed. After querying it in a knowledge base, we obtain a list containing up to 1000 query-reply pairs $\\langle q^*, r^*\\rangle $ , whose queries share most words as the input query $q$ . This step retrieves coarse-grained candidates efficiently, which is accomplished by an inversed index.",
"Then, we measure the relatedness between the query $q$ and each $\\langle q^*, r^*\\rangle $ pair in a fine-grained fashion. In our system, both $q$ - $q^*$ and $q$ - $r^*$ relevance scores are considered. A classifier judges whether $q$ matches $q^*$ and $r^*$ ; its confidence degree is used as the scorer. We have tens of features, and several important ones include word overlap ratio, the cosine measure of a pretrained topic model coefficients, and the cosine measures of word embedding vectors. (Details are beyond the scope of this paper; any well-designed retrieval system might fit into our framework.)",
"In this way, we obtain a query-reply pair $\\langle q^*, r^*\\rangle $ that best matches the original query $q$ ; the corresponding utterance $r^*$ is considered as a candidate reply retrieved from the database."
],
[
"Using neural networks to build end-to-end trainable dialog systems has become a new research trend in the past year. A generative dialog system can synthesize new utterances, which is complementary to retrieval-based methods.",
"Typically, an encoder-decoder architecture is applied to encode a query as vectors and to decode the vectors to a reply utterance. With recurrent neural networks (RNNs) as the encoder and decoder, such architecture is also known as a seq2seq model, which has wide applications in neural machine translation nbciteseq2seq, abstractive summarization nbcitesummarization, etc. That being said, previous studies indicate seq2seq has its own shortcoming for dialog systems.nbciteseq2BF suggests that, in open-domain conversation systems, the query does not carry sufficient information for the reply; that the seq2seq model thus tends to generate short and meaningless sentences with little substance.",
"To address this problem, we adopt a biseq2seq model, which is proposed in nbcitemultiseq2seq for multi-source machine translation. The biseq2seq model takes into consideration the retrieved reply as a reference in addition to query information (Figure 2 ). Hence, the generated reply can be not only fluent and logical with respect to the query, but also meaningful as it is enhanced by a retrieved candidate.",
"Specifically, we use an RNN with gated recurrent units (GRUs) for sequence modeling. Let $\\mathbf {x}_t$ be the word embeddings of the time step $t$ and $\\mathbf {h}_{t-1}$ be the previous hidden state of RNN. We have ",
"$$\\mathbf {r}_t &= \\sigma (W_r\\mathbf {x}_t+ U_r\\mathbf {h}_{t-1} + \\mathbf {b}_r)\\\\\n\\mathbf {z}_t &= \\sigma (W_z\\mathbf {x}_t+ U_r\\mathbf {h}_{t-1} + \\mathbf {b}_z)\\\\\n\\tilde{\\mathbf {h}}_t &= \\tanh \\big (W_x\\mathbf {x}_t+ U_x (\\mathbf {r}_t \\circ \\mathbf {h}_{t-1})\\big )\\\\\n\\mathbf {h}_t &= (1-\\mathbf {z}_t)\\circ \\mathbf {h}_{t-1} + \\mathbf {z}_t \\circ \\tilde{\\mathbf {h}}_t$$ (Eq. 10) ",
" where $\\mathbf {r}_t$ and $\\mathbf {z}_t$ are known as gates, $W$ 's and $\\mathbf {b}$ 's are parameters, and “ $\\circ $ ” refers to element-wise product.",
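The GRU update above can be transcribed almost literally into code; the NumPy sketch below does so with toy dimensions and random, untrained weights, purely to make the gating equations concrete.

```python
# Literal NumPy transcription of the GRU update above (toy sizes, random weights).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d_x, d_h = 8, 16                                    # toy embedding / hidden sizes
rng = np.random.default_rng(0)
W_r, W_z, W_x = [0.1 * rng.standard_normal((d_h, d_x)) for _ in range(3)]
U_r, U_z, U_x = [0.1 * rng.standard_normal((d_h, d_h)) for _ in range(3)]
b_r, b_z = np.zeros(d_h), np.zeros(d_h)

def gru_step(x_t, h_prev):
    r = sigmoid(W_r @ x_t + U_r @ h_prev + b_r)      # reset gate
    z = sigmoid(W_z @ x_t + U_z @ h_prev + b_z)      # update gate
    h_tilde = np.tanh(W_x @ x_t + U_x @ (r * h_prev))
    return (1.0 - z) * h_prev + z * h_tilde

h = np.zeros(d_h)
for x_t in 0.1 * rng.standard_normal((5, d_x)):      # a 5-step toy "sentence"
    h = gru_step(x_t, h)
print(h.shape)                                       # (16,)
```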
"After two RNNs go through $q$ and $r^*$ , respectively, we obtain two vectors capturing their meanings. We denote them as bold letters $\\mathbf {q}$ and $\\mathbf {r}^*$ , which are concatenated as $[\\mathbf {q}; \\mathbf {r}^*]$ and linearly transformed before being fed to the decoder as the initial state.",
"During reply generation, we also use GRU-RNN, given by Equations 10 –. But at each time step, a softmax layer outputs the probability that a word would occur in the next step, i.e., ",
"$$p(w_i|\\mathbf {h}_t) = \\frac{\\exp \\left\\lbrace W_i^\\top \\mathbf {h}_t\\right\\rbrace +\\mathbf {b}}{\\sum _j\\exp \\left\\lbrace W_j^\\top \\mathbf {h}_t+\\mathbf {b}\\right\\rbrace }$$ (Eq. 11) ",
"where $W_i$ is the $i$ -th row of the output weight matrix (corresponding to $w_i$ ) and $\\mathbf {b}$ is a bias term.",
"Notice that we assign different sets of parameters—indicated by three colors in Figure 2 —for the two encoders ( $q$ and $r^*$ ) and the decoder ( $r^+$ ). This treatment is because the RNNs' semantics differ significantly from one another (even between the two encoders)."
],
[
"Now that we have a retrieved candidate reply $r^*$ as well as a generated one $r^+$ , we select one as the final reply by the $q$ - $r$ scorer in the retrieval-based dialog system (described in previous sections and not repeated here).",
"Using manually engineered features, this step can eliminate either meaningless short replies that are unfortunately generated by biseq2seq or less relevant replies given by the retrieval system. We call this a post-reranker in our model ensemble."
],
[
"We train each component separately because the retrieval part is not end-to-end learnable.",
"In the retrieval system, we use the classifier's confidence as the relevance score. The training set consists of 10k samples, which are either in the original human-human utterance pairs or generated by negative sampling. We made efforts to collect binary labels from a crowd-sourcing platform, indicating whether a query is relevant to another query and whether it is relevant to a particular reply. We find using crowd-sourced labels results in better performance than original negative sampling.",
"For biseq2seq, we use human-human utterance pairs $\\langle q, r\\rangle $ as data samples. A retrieved candidate $r^*$ is also provided as the input when we train the neural network. Standard cross-entropy loss of all words in the reply is applied as the training objective. For a particular training sample whose reply is of length $T$ , the cost is ",
"$$J = -\\sum _{i=1}^T\\sum _{j=1}^{V}{t_j^{(i)}\\log {y_j^{(i)}}}$$ (Eq. 15) ",
"where $\\mathbf {t}^{(i)}$ is the one-hot vector of the next target word, serving as the groundtruth, $\\mathbf {y}$ is the output probability by softmax, and $V$ is the vocabulary size. We adopt mini-batched AdaDelta nbciteadadelta for optimization."
],
[
"In this section, we evaluate our model ensemble on Chinese (language) human-computer conversation. We first describe the datasets and settings. Then we compare our approach with strong baselines."
],
[
"Typically, a very large database of query-reply pairs is a premise for a successful retrieval-based conversation system, because the reply must appear in the database. For RNN-based sequence generators, however, it is time-consuming to train with such a large dataset; RNN's performance may also saturate when we have several million samples.",
"To construct a database for information retrieval, we collected human-human utterances from massive online forums, microblogs, and question-answering communities, such as Sina Weibo, Baidu Zhidao, and Baidu Tieba. We filtered out short and meaningless replies like “...” and “Errr.” In total, the database contains 7 million query-reply pairs for retrieval.",
"For the generation part, we constructed another dataset from various resources in public websites comprising 1,606,741 query-reply pairs. For each query $q$ , we searched for a candidate reply $r^*$ by the retrieval component and obtained a tuple $\\langle q, r^*, r\\rangle $ . As a friendly reminder, $q$ and $r^*$ are the input of biseq2seq, whose output should approximate $r$ . We randomly selected 100k triples for validation and another 6,741 for testing. The train-val-test split remains the same for all competing models.",
"The biseq2seq then degrades to an utterance autoencoder nbciteautoencoder. Also, the validation and test sets are disjoint with the training set and the database for retrieval, which complies with the convention of machine learning.",
"To train our neural models, we followed nbciteacl for hyperparameter settings. All embeddings were set to 620-dimensional and hidden states 1000d. We applied AdaDelta with a mini-batch size of 80 and other default hyperparameters for optimization. Chinese word segmentation was performed on all utterances. We kept a same set of 100k words (Chinese terms) for two encoders, but 30k for the decoder due to efficiency concerns. The three neural networks do not share parameters (neither connection weights nor embeddings).",
"We did not tune the above hyperparameters, which were set empirically. The validation set was used for early stop based on the perplexity measure."
],
[
"We compare our model ensemble with each individual component and provide a thorough ablation test. Listed below are the competing methods in our experiments.",
"Retrieval. A state-of-the-practice dialog system, which is a component of our model ensemble; it is also a strong baseline because of extensive human engineering.",
"seq2seq. An encoder-encoder framework nbciteseq2seq, first introduced in nbciteacl for dialog systems.",
"biseq2seq. Another component in our approach, adapted from nbcitemultiseq2seq, which is essentially a seq2seq model extended with a retrieved reply.",
"Rerank(Retrieval,seq2seq). Post-reranking between a retrieved candidate and one generated by seq2seq.",
"Rerank(Retrieval,biseq2seq). This is the full proposed model ensemble.",
"All baselines were trained and tuned in a same way as our full model, when applicable, so that the comparison is fair."
],
[
"We evaluated our approach in terms of both subjective and objective evaluation.",
"Human evaluation, albeit time- and labor-consuming, conforms to the ultimate goal of open-domain conversation systems. We asked three educated volunteers to annotate the results using a common protocol known as pointwise annotation nbciteacl,ijcai,seq2BF. In other words, annotators were asked to label either “0” (bad), “1” (borderline), or “2” (good) to a query-reply pair. The subjective evaluation was performed in a strict random and blind fashion to rule out human bias.",
"We adopted BLEU-1, BLEU-2, BLEU-3 and BLEU-4 as automatic evaluation. While nbcitehowNOTto further aggressively argues that no existing automatic metric is appropriate for open-domain dialogs, they show a slight positive correlation between BLEU-2 and human evaluation in non-technical Twitter domain, which is similar to our scenario. We nonetheless include BLEU scores as expedient objective evaluation, serving as supporting evidence. BLEUs are also used in nbcitenaacl for model comparison and in nbciteseq2BF for model selection.",
"Notice that, automatic metrics were computed on the entire test set, whereas subjective evaluation was based on 79 randomly chosen test samples due to the limitation of human resources available.",
"We present our main results in Table 2 . As shown, the retrieval system, which our model ensemble is based on, achieves better performance than RNN-based sequence generation. The result is not consistent with nbciteacl, where their RNNs are slightly better than retrieval-based methods. After closely examining their paper, we find that their database is multiple times smaller than ours, which may, along with different features and retrieval methods, explain the phenomenon. This also verifies that the retrieval-based dialog system in our experiment is a strong baseline to compare with.",
"Combining the retrieval system and the RNN generator by bi-sequence input and post-reranking, we achieve the highest performance in terms of both human evaluation and BLEU scores. Concretely, our model ensemble outperforms the state-of-the-practice retrieval system by $ +13.6\\%$ averaged human scores, which we believe is a large margin."
],
[
"Having verified that our model ensemble achieves better performance than all baselines, we are further curious how each gadget contributes to our final system. Specially, we focus on the following research questions.",
"RQ1: What is the performance of biseq2seq (the 1 step in Figure 1 ) in comparison with traditional seq2seq?",
"From the BLEU scores in Table 2 , we see biseq2seq significantly outperforms conventional seq2seq, showing that, if enriched with a retrieved human utterance as a candidate, the encoder-decoder framework can generate much more human-like utterances.",
"We then introduce in Table 3 another measure, the entropy of a sentence, defined as $-\\frac{1}{|R|}\\sum _{w\\in R}\\log p(w)$ ",
"where $R$ refers to all replies. Entropy is used in nbcitevariationalDialog and nbciteseq2BF to measure the serendipity of generated utterances. The results in Table 3 confirm that biseq2seq indeed integrates information from the retrieved candidate, so that it alleviates the “low-substance” problem of RNNs and can generate utterances more meaningful than traditional seq2seq. And the statistic result also displays that biseq2seq generates longer sentences than seq2seq approach.",
"RQ2: How do the retrieval- and generation-based systems contribute to post-reranking (the 2 step in Figure 1 )?",
"We plot in Figure 3 the percentage by which the post-raranker chooses a retrieved candidate or a generated one. In the retrieval-and-seq2seq ensemble (Figure 3 a), 54.65% retrieved results and 45.35% generated ones are selected. In retrieval-and-biseq2seq ensemble, the percentage becomes 44.77% vs. 55.23%. The trend further indicates that biseq2seq is better than seq2seq (at least) from the reranker's point of view. More importantly, as the percentages are close to 50%, both the retrieval system and the generation system contribute a significant portion to our final ensemble.",
"RQ3: Do we obtain further gain by combining the two gadgets 1 and 2 in Figure 1 ?",
"We would also like to verify if the combination of biseq2seq and post-reranking mechanisms will yield further gain in our ensemble. To test this, we compare the full model Rerank(Retrieval,biseq2seq) with an ensemble that uses traditional seq2seq, i.e., Rerank(Retrieval,seq2seq). As indicated in Table 2 , even with the post-reranking mechanism, the ensemble with underlying biseq2seq still outperforms the one with seq2seq. Likewise, Rerank(Retrieval,biseq2seq) outperforms both Retrieval and biseq2seq. These results are consistent in terms of all metrics except a BLEU-4 score.",
"Through the above ablation tests, we conclude that both gadgets (biseq2seq and post-reranking) play a role in our ensemble when we combine the retrieval and the generative systems."
],
[
"Table 4 presents two examples of our ensemble and its “base” models. We see that biseq2seq is indeed influenced by the retrieved candidates. As opposed to traditional seq2seq, several content words in the retrieved replies (e.g., crush) also appear in biseq2seq's output, making the utterances more meaningful. The post-reranker also chooses a more appropriate candidate as the reply."
],
[
"In early years, researchers mainly focus on domain-specific dialog systems, e.g., train routing nbcitetrain, movie information nbcitemovie, and human tutoring nbcitetutor. Typically, a pre-constructed ontology defines a finite set of slots and values, for example, cuisine, location, and price range in a food service dialog system; during human-computer interaction, a state tracker fills plausible values to each slot from user input, and recommend the restaurant that best meets the user's requirement nbcitewebstyle,ACL15statetracking,pseudoN2N.",
"In the open domain, however, such slot-filling approaches would probably fail because of the diversity of topics and natural language utterances. nbciteretrieval1 applies information retrieval techniques to search for related queries and replies. nbciteretrieval2 and nbcitesigir use both shallow hand-crafted features and deep neural networks for matching. nbciteijcai proposes a random walk-style algorithm to rank candidate replies. In addition, their model can introduce additional content (related entities in the dialog context) by searching a knowledge base when a stalemate occurs during human-computer conversation.",
"Generative dialog systems have recently attracted increasing attention in the NLP community. nbcitesmt formulates query-reply transformation as a phrase-based machine translation. Since the last year, the renewed prosperity of neural networks witnesses an emerging trend in using RNN for dialog systems nbcitenn0,BoWdialog,acl,aaai. However, a known issue with RNN is that it prefers to generate short, meaningless utterances. nbcitenaacl proposes a mutual information objective in contrast to the conventional maximum likelihood criterion. nbciteseq2BF and nbcitetopic introduce additional content (either the most mutually informative word or topic information) to the reply generator. nbcitevariationalDialog applies a variational encoder to capture query information as a distribution, from which a random vector is sampled for reply generation.",
"To the best of our knowledge, we are the first to combine retrieval-based and generation-based dialog systems. The use of biseq2seq and post-reranking is also a new insight of this paper."
],
[
"In this paper, we propose a novel ensemble of retrieval-based and generation-based open-domain dialog systems. The retrieval part searches a best-match candidate reply, which is, along with the original query, fed to an RNN-based biseq2seq reply generator. The generated utterance is fed back as a new candidate to the retrieval system for post-reranking. Experimental results show that our ensemble outperforms its underlying retrieval system and generation system by a large margin. In addition, the ablation test demonstrates both the biseq2seq and post-reranking mechanisms play an important role in the ensemble.",
"Our research also points out several promising directions for future work, for example, developing new mechanisms of combining retrieval and generative dialog systems, as well as incorporating other data-driven approaches to human-computer conversation."
]
],
"section_name": [
"Introduction",
"Overview",
"Retrieval-Based Dialog System",
"The biseq2seq Utterance Generator",
"Post-Reranking",
"Training",
"Evaluation",
"Experimental Setup",
"Competing Methods",
"Overall Performance",
"Analysis and Discussion",
"Case Study",
"Related Work",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"2632e9163cd53abf0a0dd4d18ca60bec2afe0c42",
"51fe22512641ba41bbb93fc010389f37d4697c88"
],
"answer": [
{
"evidence": [
"Human evaluation, albeit time- and labor-consuming, conforms to the ultimate goal of open-domain conversation systems. We asked three educated volunteers to annotate the results using a common protocol known as pointwise annotation nbciteacl,ijcai,seq2BF. In other words, annotators were asked to label either “0” (bad), “1” (borderline), or “2” (good) to a query-reply pair. The subjective evaluation was performed in a strict random and blind fashion to rule out human bias."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Human evaluation, albeit time- and labor-consuming, conforms to the ultimate goal of open-domain conversation systems. We asked three educated volunteers to annotate the results using a common protocol known as pointwise annotation nbciteacl,ijcai,seq2BF. "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Human evaluation, albeit time- and labor-consuming, conforms to the ultimate goal of open-domain conversation systems. We asked three educated volunteers to annotate the results using a common protocol known as pointwise annotation nbciteacl,ijcai,seq2BF. In other words, annotators were asked to label either “0” (bad), “1” (borderline), or “2” (good) to a query-reply pair. The subjective evaluation was performed in a strict random and blind fashion to rule out human bias."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We asked three educated volunteers to annotate the results using a common protocol known as pointwise annotation nbciteacl,ijcai,seq2BF."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b",
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"156c3a308ff0178111adad95bc972618ce9eeb03",
"204f4a126ad0a2c74566a58f1d74f60f11547d66"
],
"answer": [
{
"evidence": [
"To construct a database for information retrieval, we collected human-human utterances from massive online forums, microblogs, and question-answering communities, such as Sina Weibo, Baidu Zhidao, and Baidu Tieba. We filtered out short and meaningless replies like “...” and “Errr.” In total, the database contains 7 million query-reply pairs for retrieval.",
"For the generation part, we constructed another dataset from various resources in public websites comprising 1,606,741 query-reply pairs. For each query $q$ , we searched for a candidate reply $r^*$ by the retrieval component and obtained a tuple $\\langle q, r^*, r\\rangle $ . As a friendly reminder, $q$ and $r^*$ are the input of biseq2seq, whose output should approximate $r$ . We randomly selected 100k triples for validation and another 6,741 for testing. The train-val-test split remains the same for all competing models."
],
"extractive_spans": [],
"free_form_answer": "They create their own datasets from online text.",
"highlighted_evidence": [
"To construct a database for information retrieval, we collected human-human utterances from massive online forums, microblogs, and question-answering communities, such as Sina Weibo, Baidu Zhidao, and Baidu Tieba.",
"For the generation part, we constructed another dataset from various resources in public websites comprising 1,606,741 query-reply pairs."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To construct a database for information retrieval, we collected human-human utterances from massive online forums, microblogs, and question-answering communities, such as Sina Weibo, Baidu Zhidao, and Baidu Tieba. We filtered out short and meaningless replies like “...” and “Errr.” In total, the database contains 7 million query-reply pairs for retrieval.",
"For the generation part, we constructed another dataset from various resources in public websites comprising 1,606,741 query-reply pairs. For each query $q$ , we searched for a candidate reply $r^*$ by the retrieval component and obtained a tuple $\\langle q, r^*, r\\rangle $ . As a friendly reminder, $q$ and $r^*$ are the input of biseq2seq, whose output should approximate $r$ . We randomly selected 100k triples for validation and another 6,741 for testing. The train-val-test split remains the same for all competing models."
],
"extractive_spans": [
"To construct a database for information retrieval, we collected human-human utterances from massive online forums, microblogs, and question-answering communities",
"the database contains 7 million query-reply pairs for retrieval",
"For the generation part, we constructed another dataset from various resources in public websites comprising 1,606,741 query-reply pairs"
],
"free_form_answer": "",
"highlighted_evidence": [
"To construct a database for information retrieval, we collected human-human utterances from massive online forums, microblogs, and question-answering communities, such as Sina Weibo, Baidu Zhidao, and Baidu Tieba. We filtered out short and meaningless replies like “...” and “Errr.” In total, the database contains 7 million query-reply pairs for retrieval.\n\nFor the generation part, we constructed another dataset from various resources in public websites comprising 1,606,741 query-reply pairs. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"85d2c0c91a296ba570e59c753d88bb72231274d9",
"9d78dfb59788e9e4a7fc7cb0c5388efac1dd2a1d"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Were human evaluations conducted?",
"What datasets are used?",
"How does inference time compare to other methods?"
],
"question_id": [
"97055ab0227ed6ac7a8eba558b94f01867bb9562",
"3e23cc3c5e4d5cec51d158130d6aeae120e94fc8",
"bfcbb47f3c54ee1a459183e04e4c5a41ac9ae83b"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"dialog",
"dialog",
"dialog"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: The overall architecture of our model ensemble. We combine retrieval and generative dialog systems by 1© enhancing the generator with the retrieved candidate and by 2© post-reranking of both retrieved and generated candidates.",
"Figure 2: The biseq2seq model, which takes as input a query q and a retrieved candidate reply r∗; it outputs a new reply r+.",
"Table 1: Statistics of our datasets.",
"Table 2: Results of our ensemble and competing methods in terms of average human scores and BLEUs. Inter-annotator agreement for human annotation: Fleiss’ κ = 0.2824 [3], std = 0.4031, indicating moderate agreement.",
"Table 3: Entropy and length of generated replies. We also include groundtruth for reference. A larger entropy value indicates that the replies are less common, and probably, more meaningful.",
"Figure 3: The percentage by which our post-reranker chooses a retrieved reply or a generated reply. (a) Ensemble of Retrieval and seq2seq; (b) Ensemble of Retrieval and biseq2seq.",
"Table 4: Examples of retrieved replies and generated ones. An arrow “←” indicates the one selected during post-reranking. Also included are replies generated by conventional seq2seq. Notice that it is not part of our model and thus not considered for reranking."
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"8-Figure3-1.png",
"8-Table4-1.png"
]
} | [
"What datasets are used?"
] | [
[
"1610.07149-Experimental Setup-2",
"1610.07149-Experimental Setup-1"
]
] | [
"They create their own datasets from online text."
] | 339 |
2003.10564 | Improving Yor\`ub\'a Diacritic Restoration | Yor\`ub\'a is a widely spoken West African language with a writing system rich in orthographic and tonal diacritics. They provide morphological information, are crucial for lexical disambiguation, pronunciation and are vital for any computational Speech or Natural Language Processing tasks. However diacritic marks are commonly excluded from electronic texts due to limited device and application support as well as general education on proper usage. We report on recent efforts at dataset cultivation. By aggregating and improving disparate texts from the web and various personal libraries, we were able to significantly grow our clean Yor\`ub\'a dataset from a majority Bibilical text corpora with three sources to millions of tokens from over a dozen sources. We evaluate updated diacritic restoration models on a new, general purpose, public-domain Yor\`ub\'a evaluation dataset of modern journalistic news text, selected to be multi-purpose and reflecting contemporary usage. All pre-trained models, datasets and source-code have been released as an open-source project to advance efforts on Yor\`ub\'a language technology. | {
"paragraphs": [
[
"Yorùbá is a tonal language spoken by more than 40 Million people in the countries of Nigeria, Benin and Togo in West Africa. The phonology is comprised of eighteen consonants, seven oral vowel and five nasal vowel phonemes with three kinds of tones realized on all vowels and syllabic nasal consonants BIBREF0. Yorùbá orthography makes notable use of tonal diacritics, known as amí ohùn, to designate tonal patterns, and orthographic diacritics like underdots for various language sounds BIBREF1, BIBREF2.",
"Diacritics provide morphological information, are crucial for lexical disambiguation and pronunciation, and are vital for any computational Speech or Natural Language Processing (NLP) task. To build a robust ecosystem of Yorùbá-first language technologies, Yorùbá text must be correctly represented in computing environments. The ultimate objective of automatic diacritic restoration (ADR) systems is to facilitate text entry and text correction that encourages the correct orthography and promotes quotidian usage of the language in electronic media."
],
[
"The main challenge in non-diacritized text is that it is very ambiguous BIBREF3, BIBREF4, BIBREF1, BIBREF5. ADR attempts to decode the ambiguity present in undiacritized text. Adegbola et al. assert that for ADR the “prevailing error factor is the number of valid alternative arrangements of the diacritical marks that can be applied to the vowels and syllabic nasals within the words\" BIBREF1."
],
[
"To make the first open-sourced ADR models available to a wider audience, we tested extensively on colloquial and conversational text. These soft-attention seq2seq models BIBREF3, trained on the first three sources in Table TABREF5, suffered from domain-mismatch generalization errors and appeared particularly weak when presented with contractions, loan words or variants of common phrases. Because they were trained on majority Biblical text, we attributed these errors to low-diversity of sources and an insufficient number of training examples. To remedy this problem, we aggregated text from a variety of online public-domain sources as well as actual books. After scanning physical books from personal libraries, we successfully employed commercial Optical Character Recognition (OCR) software to concurrently use English, Romanian and Vietnamese characters, forming an approximative superset of the Yorùbá character set. Text with inconsistent quality was put into a special queue for subsequent human supervision and manual correction. The post-OCR correction of Háà Ènìyàn, a work of fiction of some 20,038 words, took a single expert two weeks of part-time work by to review and correct. Overall, the new data sources comprised varied text from conversational, various literary and religious sources as well as news magazines, a book of proverbs and a Human Rights declaration."
],
[
"Data preprocessing, parallel text preparation and training hyper-parameters are the same as in BIBREF3. Experiments included evaluations of the effect of the various texts, notably for JW300, which is a disproportionately large contributor to the dataset. We also evaluated models trained with pre-trained FastText embeddings to understand the boost in performance possible with word embeddings BIBREF6, BIBREF7. Our training hardware configuration was an AWS EC2 p3.2xlarge instance with OpenNMT-py BIBREF8."
],
[
"To make ADR productive for users, our research experiments needed to be guided by a test set based around modern, colloquial and not exclusively literary text. After much review, we selected Global Voices, a corpus of journalistic news text from a multilingual community of journalists, translators, bloggers, academics and human rights activists BIBREF9."
],
[
"We evaluated the ADR models by computing a single-reference BLEU score using the Moses multi-bleu.perl scoring script, the predicted perplexity of the model's own predictions and the Word Error Rate (WER). All models with additional data improved over the 3-corpus soft-attention baseline, with JW300 providing a {33%, 11%} boost in BLEU and absolute WER respectively. Error analyses revealed that the Transformer was robust to receiving digits, rare or code-switched words as input and degraded ADR performance gracefully. In many cases, this meant the model predicted the undiacritized word form or a related word from the context, but continued to correctly predict subsequent words in the sequence. The FastText embedding provided a small boost in performance for the Transformer, but was mixed across metrics for the soft-attention models."
],
[
"Promising next steps include further automation of our human-in-the-middle data-cleaning tools, further research on contextualized word embeddings for Yorùbá and serving or deploying the improved ADR models in user-facing applications and devices."
]
],
"section_name": [
"Introduction",
"Introduction ::: Ambiguity in non-diacritized text",
"Introduction ::: Improving generalization performance",
"Methodology ::: Experimental setup",
"Methodology ::: A new, modern multi-purpose evaluation dataset",
"Results",
"Conclusions and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"15a9dca52eae9fc0843b42190ccfe9f3054176b8",
"dc21bea25d4e5acf815bbc3e1afd1fd847bffed2"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Data sources, prevalence and category of text"
],
"extractive_spans": [],
"free_form_answer": "online public-domain sources, private sources and actual books",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Data sources, prevalence and category of text"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Data sources, prevalence and category of text"
],
"extractive_spans": [],
"free_form_answer": "Various web resources and couple of private sources as listed in the table.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Data sources, prevalence and category of text"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two"
],
"paper_read": [
"no"
],
"question": [
"What sources did they get the data from?"
],
"question_id": [
"25f699c7a33e77bd552782fb3886b9df9d02abb2"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Table 1: Diacritic characters with their non-diacritic forms",
"Table 2: Data sources, prevalence and category of text",
"Table 3: BLEU, predicted perplexity & WER on the Global Voices testset",
"Table 4: The best performing Transformer model trained with the FastText embedding was used to generate predictions. The Baseline model is the 3-corpus soft-attention model. ADR errors are in red, robust predictions of rare, loan words or digits in green."
],
"file": [
"1-Table1-1.png",
"2-Table2-1.png",
"3-Table3-1.png",
"4-Table4-1.png"
]
} | [
"What sources did they get the data from?"
] | [
[
"2003.10564-2-Table2-1.png"
]
] | [
"Various web resources and couple of private sources as listed in the table."
] | 341 |
1902.10246 | Fixed-Size Ordinally Forgetting Encoding Based Word Sense Disambiguation | In this paper, we present our method of using fixed-size ordinally forgetting encoding (FOFE) to solve the word sense disambiguation (WSD) problem. FOFE enables us to encode variable-length sequence of words into a theoretically unique fixed-size representation that can be fed into a feed forward neural network (FFNN), while keeping the positional information between words. In our method, a FOFE-based FFNN is used to train a pseudo language model over unlabelled corpus, then the pre-trained language model is capable of abstracting the surrounding context of polyseme instances in labelled corpus into context embeddings. Next, we take advantage of these context embeddings towards WSD classification. We conducted experiments on several WSD data sets, which demonstrates that our proposed method can achieve comparable performance to that of the state-of-the-art approach at the expense of much lower computational cost. | {
"paragraphs": [
[
"Words with multiple senses commonly exist in many languages. For example, the word bank can either mean a “financial establishment” or “the land alongside or sloping down to a river or lake”, based on different contexts. Such a word is called a “polyseme”. The task to identify the meaning of a polyseme in its surrounding context is called word sense disambiguation (WSD). Word sense disambiguation is a long-standing problem in natural language processing (NLP), and has broad applications in other NLP problems such as machine translation BIBREF0 . Lexical sample task and all-word task are the two main branches of WSD problem. The former focuses on only a pre-selected set of polysemes whereas the later intends to disambiguate every polyseme in the entire text. Numerous works have been devoted in WSD task, including supervised, unsupervised, semi-supervised and knowledge based learning BIBREF1 . Our work focuses on using supervised learning to solve all-word WSD problem.",
"Most supervised approaches focus on extracting features from words in the context. Early approaches mostly depend on hand-crafted features. For example, IMS by BIBREF2 uses POS tags, surrounding words and collections of local words as features. These approaches are later improved by combining with word embedding features BIBREF0 , which better represents the words' semantic information in a real-value space. However, these methods neglect the valuable positional information between the words in the sequence BIBREF3 . The bi-directional Long-Short-Term-Memory (LSTM) approach by BIBREF3 provides one way to leverage the order of words. Recently, BIBREF4 improved the performance by pre-training a LSTM language model with a large unlabelled corpus, and using this model to generate sense vectors for further WSD predictions. However, LSTM significantly increases the computational complexity during the training process.",
"The development of the so called “fixed-size ordinally forgetting encoding” (FOFE) has enabled us to consider more efficient method. As firstly proposed in BIBREF5 , FOFE provides a way to encode the entire sequence of words of variable length into an almost unique fixed-size representation, while also retain the positional information for words in the sequence. FOFE has been applied to several NLP problems in the past, such as language model BIBREF5 , named entity recognition BIBREF6 , and word embedding BIBREF7 . The promising results demonstrated by the FOFE approach in these areas inspired us to apply FOFE in solving the WSD problem. In this paper, we will first describe how FOFE is used to encode sequence of any length into a fixed-size representation. Next, we elaborate on how a pseudo language model is trained with the FOFE encoding from unlabelled data for the purpose of context abstraction, and how a classifier for each polyseme is built from context abstractions of its labelled training data. Lastly, we provide the experiment results of our method on several WSD data sets to justify the equivalent performance as the state-of-the-art approach."
],
[
"The fact that human languages consist of variable-length sequence of words requires NLP models to be able to consume variable-length data. RNN/LSTM addresses this issue by recurrent connections, but such recurrence consequently increases the computational complexity. On the contrary, feed forward neural network (FFNN) has been widely adopted in many artificial intelligence problems due to its powerful modelling ability and fast computation, but is also limited by its requirement of fixed-size input. FOFE aims at encoding variable-length sequence of words into a fixed-size representation, which subsequently can be fed into an FFNN.",
"Given vocabulary INLINEFORM0 of size INLINEFORM1 , each word can be represented by a one-hot vector. FOFE can encode a sequence of words of any length using linear combination, with a forget factor to reflect the positional information. For a sequence of words INLINEFORM2 from V, let INLINEFORM3 denote the one-hot representation for the INLINEFORM4 word, then the FOFE code of S can be recursively obtained using following equation (set INLINEFORM5 ): INLINEFORM6 ",
"where INLINEFORM0 is a constant between 0 and 1, called forgetting factor. For example, assuming A, B, C are three words with one-hot vectors INLINEFORM1 , INLINEFORM2 , INLINEFORM3 respectively. The FOFE encoding from left to right for ABC is [ INLINEFORM4 , INLINEFORM5 ,1] and for ABCBC is [ INLINEFORM6 , INLINEFORM7 , INLINEFORM8 ]. It becomes evident that the FOFE code is in fixed size, which is equal to the size of the one-hot vector, regardless of the length of the sequence INLINEFORM9 .",
"The FOFE encoding has the property that the original sequence can be unequivocally recovered from the FOFE encoding. According to BIBREF5 , the uniqueness for the FOFE encoding of a sequence is confirmed by the following two theorems:",
"Theorem 1 If the forgetting factor INLINEFORM0 satisfies INLINEFORM1 , FOFE is unique for any sequence of finite length INLINEFORM2 and any countable vocabulary INLINEFORM3 .",
"Theorem 2 If the forgetting factor INLINEFORM0 satisfies INLINEFORM1 , FOFE is almost unique for any finite value of INLINEFORM2 and vocabulary INLINEFORM3 , except only a finite set of countable choices of INLINEFORM4 .",
"Even for situations described by Theorem SECREF2 where uniqueness is not strictly guaranteed, the probability for collision is extremely low in practice. Therefore, FOFE can be safely considered as an encoding mechanism that converts variable-length sequence into a fixed-size representation theoretically without any loss of information."
],
[
"The linguistic distribution hypothesis states that words that occur in close contexts should have similar meaning BIBREF8 . It implies that the particular sense of a polyseme is highly related to its surrounding context. Moreover, human decides the sense of a polyseme by firstly understanding its occurring context. Likewise, our proposed model has two stages, as shown in Figure FIGREF3 : training a FOFE-based pseudo language model that abstracts context as embeddings, and performing WSD classification over context embeddings."
],
[
"A language model is trained with large unlabelled corpus by BIBREF4 in order to overcome the shortage of WSD training data. A language model represents the probability distribution of a given sequence of words, and it is commonly used in predicting the subsequent word given preceding sequence. BIBREF5 proposed a FOFE-based neural network language model by feeding FOFE code of preceding sequence into FFNN. WSD is different from language model in terms of that the sense prediction of a target word depends on its surrounding sequence rather than only preceding sequence. Hence, we build a pseudo language model that uses both preceding and succeeding sequence to accommodate the purpose of WSD tasks.",
"The preceding and succeeding sequences are separately converted into FOFE codes. As shown in Figure FIGREF3 , the words preceding the target word are encoded from left to right as the left FOFE code, and the words succeeding the target word are encoded from right to left as the right FOFE code. The forgetting factor that underlies the encoding direction reflects the reducing relevance of a word due to the increasing distance relative to the target word. Furthermore, the FOFE is scalable to higher orders by merging tailing partial FOFE codes. For example, a second order FOFE of sequence INLINEFORM0 can be obtained as INLINEFORM1 . Lastly, the left and right FOFE codes are concatenated into one single fixed-size vector, which can be fed into an FFNN as an input.",
"FFNN is constructed in fully-connected layers. Each layer receives values from previous layer as input, and produces values through a function over weighted input values as its output. FFNN increasingly abstracts the features of the data through the layers. As the pseudo language model is trained to predict the target word, the output layer is irrelevant to WSD task and hence can be discarded. However, the remaining layers still have learned the ability to generalize features from word to context during the training process. The values of the held-out layer (the second last layer) are extracted as context embedding, which provides a nice numerical abstraction of the surrounding context of a target word."
],
[
"Words with the same sense mostly appear in similar contexts, hence the context embeddings of their contexts are supposed to be close in the embedding space. As the FOFE-based pseudo language model is capable of abstracting surrounding context for any target word as context embeddings, applying the language model on instances in annotated corpus produces context embeddings for senses.",
"A classifier can be built for each polyseme over the context embeddings of all its occurring contexts in the training corpus. When predict the sense of a polyseme, we similarly extract the context embedding from the context surrounding the predicting polyseme, and send it to the polyseme's classifier to decide the sense. If a classifier cannot be built for the predicting polyseme due to the lack of training instance, the first sense from the dictionary is used instead.",
"For example, word INLINEFORM0 has two senses INLINEFORM1 for INLINEFORM2 occurring in the training corpus, and each sense has INLINEFORM3 instances. The pseudo language model converts all the instances into context embeddings INLINEFORM4 for INLINEFORM5 , and these embeddings are used as training data to build a classifier for INLINEFORM6 . The classifier can then be used to predict the sense of an instance of INLINEFORM7 by taking the predicting context embedding INLINEFORM8 .",
"The context embeddings should fit most traditional classifiers, and the choice of classifier is empirical. BIBREF4 takes the average over context embeddings to construct sense embeddings INLINEFORM0 , and selects the sense whose sense embedding is closest to the predicting context embedding measured by cosine similarity. In practice, we found k-nearest neighbor (kNN) algorithm, which predicts the sense to be the majority of k nearest neighbors, produces better performance on the context embeddings produced by our FOFE-based pseudo language model."
],
[
"To evaluate the performance of our proposed model, we implemented our model using Tensorflow BIBREF11 and conducted experiments on standard SemEval data that are labelled by senses from WordNet 3.0 BIBREF12 . We built the classifier using SemCor BIBREF13 as training corpus, and evaluated on Senseval2 BIBREF14 , and SemEval-2013 Task 12 BIBREF15 ."
],
[
"When training our FOFE-based pseudo language model, we use Google1B BIBREF10 corpus as the training data, which consists of approximately 0.8 billion words. The 100,000 most frequent words in the corpus are chosen as the vocabulary. The dimension of word embedding is chosen to be 512. During the experiment, the best results are produced by the 3rd order pseudo language model. The concatenation of the left and right 3rd order FOFE codes leads to a dimension of 512 * 3 * 2 = 3072 for the FFNN's input layer. Then we append three hidden layers of dimension 4096. Additionally, we choose a constant forgetting factor INLINEFORM0 for the FOFE encoding and INLINEFORM1 for our k-nearest neighbor classifier."
],
[
"Table TABREF6 presents the micro F1 scores from different models. Note that we use a corpus with 0.8 billion words and vocabulary of 100,000 words when training the language model, comparing with BIBREF4 using 100 billion words and vocabulary of 1,000,000 words. The context abstraction using the language model is the most crucial step. The sizes of the training corpus and vocabulary significantly affect the performance of this process, and consequently the final WSD results. However, BIBREF4 did not publish the 100 billion words corpus used for training their LSTM language model.",
"Recently, BIBREF9 reimplemented the LSTM-based WSD classifier. The authors trained the language model with a smaller corpus Gigaword BIBREF16 of 2 billion words and vocabulary of 1 million words, and reported the performance. Their published code also enabled us to train an LSTM model with the same data used in training our FOFE model, and compare the performances at the equivalent conditions.",
"Additionally, the bottleneck of the LSTM approach is the training speed. The training process of the LSTM model by BIBREF9 took approximately 4.5 months even after applying optimization of trimming sentences, while the training process of our FOFE-based model took around 3 days to produce the claimed results."
],
[
"In this paper, we propose a new method for word sense disambiguation problem, which adopts the fixed-size ordinally forgetting encoding (FOFE) to convert variable-length context into almost unique fixed-size representation. A feed forward neural network pseudo language model is trained with FOFE codes of large unlabelled corpus, and used for abstracting the context embeddings of annotated instance to build a k-nearest neighbor classifier for every polyseme. Compared to the high computational cost induced by LSTM model, the fixed-size encoding by FOFE enables the usage of a simple feed forward neural network, which is not only much more efficient but also equivalently promising in numerical performance."
]
],
"section_name": [
"Introduction",
"Fixed-size Ordinally Forgetting Encoding",
"Methodology",
"FOFE-based Pseudo Language Model",
"WSD Classification",
"Experiment",
"Experiment settings",
"Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"15aa4f02de69ab96efc8640b06bebe8f5fdac40d",
"4d6e8ce4649adc90c2711d13050849f7b76bffb6"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"5b5c36df41eef3ae73d41f72dbb4b15207f60976",
"7153d5850924bc7c51af7a0b45139d521915cd37"
],
"answer": [
{
"evidence": [
"Additionally, the bottleneck of the LSTM approach is the training speed. The training process of the LSTM model by BIBREF9 took approximately 4.5 months even after applying optimization of trimming sentences, while the training process of our FOFE-based model took around 3 days to produce the claimed results."
],
"extractive_spans": [
"BIBREF9 took approximately 4.5 months even after applying optimization of trimming sentences, while the training process of our FOFE-based model took around 3 days"
],
"free_form_answer": "",
"highlighted_evidence": [
"The training process of the LSTM model by BIBREF9 took approximately 4.5 months even after applying optimization of trimming sentences, while the training process of our FOFE-based model took around 3 days to produce the claimed results."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Additionally, the bottleneck of the LSTM approach is the training speed. The training process of the LSTM model by BIBREF9 took approximately 4.5 months even after applying optimization of trimming sentences, while the training process of our FOFE-based model took around 3 days to produce the claimed results."
],
"extractive_spans": [],
"free_form_answer": "By 45 times.",
"highlighted_evidence": [
"The training process of the LSTM model by BIBREF9 took approximately 4.5 months even after applying optimization of trimming sentences, while the training process of our FOFE-based model took around 3 days to produce the claimed results."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"c3b7f6b9327717bc4e4257e6cc46f3fa0a604725",
"c3b99b0ac3dfa5415cdb08f27b545e0f5b9bc4cd"
],
"answer": [
{
"evidence": [
"Most supervised approaches focus on extracting features from words in the context. Early approaches mostly depend on hand-crafted features. For example, IMS by BIBREF2 uses POS tags, surrounding words and collections of local words as features. These approaches are later improved by combining with word embedding features BIBREF0 , which better represents the words' semantic information in a real-value space. However, these methods neglect the valuable positional information between the words in the sequence BIBREF3 . The bi-directional Long-Short-Term-Memory (LSTM) approach by BIBREF3 provides one way to leverage the order of words. Recently, BIBREF4 improved the performance by pre-training a LSTM language model with a large unlabelled corpus, and using this model to generate sense vectors for further WSD predictions. However, LSTM significantly increases the computational complexity during the training process.",
"Table TABREF6 presents the micro F1 scores from different models. Note that we use a corpus with 0.8 billion words and vocabulary of 100,000 words when training the language model, comparing with BIBREF4 using 100 billion words and vocabulary of 1,000,000 words. The context abstraction using the language model is the most crucial step. The sizes of the training corpus and vocabulary significantly affect the performance of this process, and consequently the final WSD results. However, BIBREF4 did not publish the 100 billion words corpus used for training their LSTM language model."
],
"extractive_spans": [
"BIBREF4"
],
"free_form_answer": "",
"highlighted_evidence": [
"Recently, BIBREF4 improved the performance by pre-training a LSTM language model with a large unlabelled corpus, and using this model to generate sense vectors for further WSD predictions.",
"Note that we use a corpus with 0.8 billion words and vocabulary of 100,000 words when training the language model, comparing with BIBREF4 using 100 billion words and vocabulary of 1,000,000 words. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: The corpus size, vocabulary size and training time when pre-training the language models, and F1 scores of different models on multiple WSD tasks using SemCor as training data. The asterisk (∗) indicates the results are from (Iacobacci et al., 2016). Our training (†) uses code published by (Le et al., 2017) with Google1B (Chelba et al., 2014) as training data."
],
"extractive_spans": [],
"free_form_answer": "LSTM",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: The corpus size, vocabulary size and training time when pre-training the language models, and F1 scores of different models on multiple WSD tasks using SemCor as training data. The asterisk (∗) indicates the results are from (Iacobacci et al., 2016). Our training (†) uses code published by (Le et al., 2017) with Google1B (Chelba et al., 2014) as training data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"3acb2ca9f2256455abd08e300e1b33947fcad757",
"679e017930c852efc550871a6dba8fa8d3607374"
],
"answer": [
{
"evidence": [
"A language model is trained with large unlabelled corpus by BIBREF4 in order to overcome the shortage of WSD training data. A language model represents the probability distribution of a given sequence of words, and it is commonly used in predicting the subsequent word given preceding sequence. BIBREF5 proposed a FOFE-based neural network language model by feeding FOFE code of preceding sequence into FFNN. WSD is different from language model in terms of that the sense prediction of a target word depends on its surrounding sequence rather than only preceding sequence. Hence, we build a pseudo language model that uses both preceding and succeeding sequence to accommodate the purpose of WSD tasks."
],
"extractive_spans": [
"different from language model in terms of that the sense prediction of a target word depends on its surrounding sequence rather than only preceding sequence"
],
"free_form_answer": "",
"highlighted_evidence": [
"A language model represents the probability distribution of a given sequence of words, and it is commonly used in predicting the subsequent word given preceding sequence. BIBREF5 proposed a FOFE-based neural network language model by feeding FOFE code of preceding sequence into FFNN. WSD is different from language model in terms of that the sense prediction of a target word depends on its surrounding sequence rather than only preceding sequence. Hence, we build a pseudo language model that uses both preceding and succeeding sequence to accommodate the purpose of WSD tasks."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The linguistic distribution hypothesis states that words that occur in close contexts should have similar meaning BIBREF8 . It implies that the particular sense of a polyseme is highly related to its surrounding context. Moreover, human decides the sense of a polyseme by firstly understanding its occurring context. Likewise, our proposed model has two stages, as shown in Figure FIGREF3 : training a FOFE-based pseudo language model that abstracts context as embeddings, and performing WSD classification over context embeddings.",
"A language model is trained with large unlabelled corpus by BIBREF4 in order to overcome the shortage of WSD training data. A language model represents the probability distribution of a given sequence of words, and it is commonly used in predicting the subsequent word given preceding sequence. BIBREF5 proposed a FOFE-based neural network language model by feeding FOFE code of preceding sequence into FFNN. WSD is different from language model in terms of that the sense prediction of a target word depends on its surrounding sequence rather than only preceding sequence. Hence, we build a pseudo language model that uses both preceding and succeeding sequence to accommodate the purpose of WSD tasks."
],
"extractive_spans": [],
"free_form_answer": "Pseudo language model abstracts context as embeddings using preceding and succeeding sequences.",
"highlighted_evidence": [
"Likewise, our proposed model has two stages, as shown in Figure FIGREF3 : training a FOFE-based pseudo language model that abstracts context as embeddings, and performing WSD classification over context embeddings.",
"Hence, we build a pseudo language model that uses both preceding and succeeding sequence to accommodate the purpose of WSD tasks."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What language is the model tested on?",
"How much lower is the computational cost of the proposed model?",
"What is the state-of-the-art model?",
"What is a pseudo language model?"
],
"question_id": [
"3e4e415e346a313f5a7c3764fe0f51c11f51b071",
"d622564b250cffbb9ebbe6636326b15ec3c622d9",
"4367617c0b8c9f33051016e8d4fbb44831c54d0f",
"2c60628d54f2492e0cbf0fb8bacd8e54117f0c18"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Context abstraction through FOFE-based pseudo language model and WSD classification over context embeddings",
"Table 1: The corpus size, vocabulary size and training time when pre-training the language models, and F1 scores of different models on multiple WSD tasks using SemCor as training data. The asterisk (∗) indicates the results are from (Iacobacci et al., 2016). Our training (†) uses code published by (Le et al., 2017) with Google1B (Chelba et al., 2014) as training data."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png"
]
} | [
"How much lower is the computational cost of the proposed model?",
"What is the state-of-the-art model?",
"What is a pseudo language model?"
] | [
[
"1902.10246-Results-2"
],
[
"1902.10246-Introduction-1",
"1902.10246-4-Table1-1.png",
"1902.10246-Results-0"
],
[
"1902.10246-Methodology-0",
"1902.10246-FOFE-based Pseudo Language Model-0"
]
] | [
"By 45 times.",
"LSTM",
"Pseudo language model abstracts context as embeddings using preceding and succeeding sequences."
] | 342 |
1706.02222 | Gated Recurrent Neural Tensor Network | Recurrent Neural Networks (RNNs), which are a powerful scheme for modeling temporal and sequential data need to capture long-term dependencies on datasets and represent them in hidden layers with a powerful model to capture more information from inputs. For modeling long-term dependencies in a dataset, the gating mechanism concept can help RNNs remember and forget previous information. Representing the hidden layers of an RNN with more expressive operations (i.e., tensor products) helps it learn a more complex relationship between the current input and the previous hidden layer information. These ideas can generally improve RNN performances. In this paper, we proposed a novel RNN architecture that combine the concepts of gating mechanism and the tensor product into a single model. By combining these two concepts into a single RNN, our proposed models learn long-term dependencies by modeling with gating units and obtain more expressive and direct interaction between input and hidden layers using a tensor product on 3-dimensional array (tensor) weight parameters. We use Long Short Term Memory (LSTM) RNN and Gated Recurrent Unit (GRU) RNN and combine them with a tensor product inside their formulations. Our proposed RNNs, which are called a Long-Short Term Memory Recurrent Neural Tensor Network (LSTMRNTN) and Gated Recurrent Unit Recurrent Neural Tensor Network (GRURNTN), are made by combining the LSTM and GRU RNN models with the tensor product. We conducted experiments with our proposed models on word-level and character-level language modeling tasks and revealed that our proposed models significantly improved their performance compared to our baseline models. | {
"paragraphs": [
[
"Modeling temporal and sequential data, which is crucial in machine learning, can be applied in many areas, such as speech and natural language processing. Deep neural networks (DNNs) have garnered interest from many researchers after being successfully applied in image classification BIBREF0 and speech recognition BIBREF1 . Another type of neural network, called a recurrent neural network (RNN) is also widely used for speech recognition BIBREF2 , machine translation BIBREF3 , BIBREF4 and language modeling BIBREF5 , BIBREF6 . RNNs have achieved many state-of-the-art results. Compared to DNNs, they have extra parameters for modeling the relationships of previous or future hidden states with current input where the RNN parameters are shared for each input time-step.",
"Generally, RNNs can be separated by a simple RNN without gating units, such as the Elman RNN BIBREF7 , the Jordan RNN BIBREF8 , and such advanced RNNs with gating units as the Long-Short Term Memory (LSTM) RNN BIBREF9 and the Gated Recurrent Unit (GRU) RNN BIBREF4 . A simple RNN usually adequate to model some dataset and a task with short-term dependencies like slot filling for spoken language understanding BIBREF10 . However, for more difficult tasks like language modeling and machine translation where most predictions need longer information and a historical context from each sentence, gating units are needed to achieve good performance. With gating units for blocking and passing information from previous or future hidden layer, we can learn long-term information and recursively backpropagate the error from our prediction without suffering from vanishing or exploding gradient problems BIBREF9 . In spite of this situation, the concept of gating mechanism does not provide an RNN with a more powerful way to model the relation between the current input and previous hidden layer representations.",
"Most interactions inside RNNs between current input and previous (or future) hidden states are represented using linear projection and addition and are transformed by the nonlinear activation function. The transition is shallow because no intermediate hidden layers exist for projecting the hidden states BIBREF11 . To get a more powerful representation on the hidden layer, Pascanu et al. BIBREF11 modified RNNs with an additional nonlinear layer from input to the hidden layer transition, hidden to hidden layer transition and also hidden to output layer transition. Socher et al. BIBREF12 , BIBREF13 proposed another approach using a tensor product for calculating output vectors given two input vectors. They modified a Recursive Neural Network (RecNN) to overcome those limitations using more direct interaction between two input layers. This architecture is called a Recursive Neural Tensor Network (RecNTN), which uses a tensor product between child input vectors to represent the parent vector representation. By adding the tensor product operation to calculate their parent vector, RecNTN significantly improves the performance of sentiment analysis and reasoning on entity relations tasks compared to standard RecNN architecture. However, those models struggle to learn long-term dependencies because the do not utilize the concept of gating mechanism.",
"In this paper, we proposed a new RNN architecture that combine the gating mechanism and tensor product concepts to incorporate both advantages in a single architecture. Using the concept of such gating mechanisms as LSTMRNN and GRURNN, our proposed architecture can learn temporal and sequential data with longer dependencies between each input time-step than simple RNNs without gating units and combine the gating units with tensor products to represent the hidden layer with more powerful operation and direct interaction. Hidden states are generated by the interaction between current input and previous (or future) hidden states using a tensor product and a non-linear activation function allows more expressive model representation. We describe two different models based on LSTMRNN and GRURNN. LSTMRNTN is our proposed model for the combination between a LSTM unit with a tensor product inside its cell equation and GRURNTN is our name for a GRU unit with a tensor product inside its candidate hidden layer equation.",
"In Section \"Background\" , we provide some background information related to our research. In Section \"Proposed Architecture\" , we describe our proposed RNN architecture in detail. We evaluate our proposed RNN architecture on word-level and character-level language modeling tasks and reported the result in Section \"Experiment Settings\" . We present related works in Section \"Related Work\" . Section \"Conclusion\" summarizes our paper and provides some possible future improvements."
],
[
"A Recurrent Neural Network (RNN) is one kind of neural network architecture for modeling sequential and temporal dependencies BIBREF2 . Typically, we have input sequence $\\mathbf {x}=(x_1,...,x_{T})$ and calculate hidden vector sequence $\\mathbf {h}=(h_1,...,h_{T})$ and output vector sequence $\\mathbf {y}=(y_1,...,y_T)$ with RNNs. A standard RNN at time $t$ -th is usually formulated as: ",
"$$h_t &=& f(x_t W_{xh} + h_{t-1} W_{hh} + b_h) \\\\\ny_t &=& g(h_t W_{hy} + b_y).$$ (Eq. 2) ",
"where $W_{xh}$ represents the input layer to the hidden layer weight matrix, $W_{hh}$ represents hidden to hidden layer weight matrix, $W_{hy}$ represents the hidden to the output weight matrix, $b_h$ and $b_y$ represent bias vectors for the hidden and output layers. $f(\\cdot )$ and $g(\\cdot )$ are nonlinear activation functions such as sigmoid or tanh."
],
[
"Simple RNNs are hard to train to capture long-term dependencies from long sequential datasets because the gradient can easily explode or vanish BIBREF14 , BIBREF15 . Because the gradient (usually) vanishes after several steps, optimizing a simple RNN is more complicated than standard neural networks. To overcome the disadvantages of simple RNNs, several researches have been done. Instead of using a first-order optimization method, one approach optimized the RNN using a second-order Hessian Free optimization BIBREF16 . Another approach, which addressed the vanishing and exploding gradient problem, modified the RNN architecture with additional parameters to control the information flow from previous hidden layers using the gating mechanism concept BIBREF9 . A gated RNN is a special recurrent neural network architecture that overcomes this weakness of a simple RNN by introducing gating units. There are variants from RNN with gating units, such as Long Short Term Memory (LSTM) RNN and Gated Recurrent Unit (GRU) RNN. In the following sections, we explain both LSTMRNN and GRURNN in more detail.",
"A Long Short Term Memory (LSTM) BIBREF9 is a gated RNN with three gating layers and memory cells. The gating layers are used by the LSTM to control the existing memory by retaining the useful information and forgetting the unrelated information. Memory cells are used for storing the information across time. The LSTM hidden layer at time $t$ is defined by the following equations BIBREF17 : ",
"$$i_t &=& \\sigma (x_t W_{xi} + h_{t-1} W_{hi} + c_{t-1} W_{ci} + b_i) \\\\\nf_t &=& \\sigma (x_t W_{xf} + h_{t-1} W_{hf} + c_{t-1} W_{cf} + b_f) \\\\\nc_t &=& f_t \\odot c_{t-1} + i_t \\odot \\tanh (x_t W_{xc} + h_{t-1} W_{hc} + b_c) \\\\\no_t &=& \\sigma (x_t W_{xo} + h_{t-1} W_{ho} + c_t W_{co} + b_o) \\\\\nh_t &=& o_t \\odot \\tanh (c_t)$$ (Eq. 6) ",
"where $\\sigma (\\cdot )$ is sigmoid activation function and $i_t, f_t, o_t$ and $c_t$ are respectively the input gates, the forget gates, the output gates and the memory cells at time-step $t$ . The input gates keep the candidate memory cell values that are useful for memory cell computation, and the forget gates keep the previous memory cell values that are useful for calculating the current memory cell. The output gates filter which the memory cell values that are useful for the output or next hidden layer input.",
"A Gated Recurrent Unit (GRU) BIBREF4 is a gated RNN with similar properties to a LSTM. However, there are several differences: a GRU does not have separated memory cells BIBREF18 , and instead of three gating layers, it only has two gating layers: reset gates and update gates. The GRU hidden layer at time $t$ is defined by the following equations BIBREF4 : ",
"$$r_t &=& \\sigma (x_t W_{xr} + h_{t-1} W_{hr} + b_r)\\\\\nz_t &=& \\sigma (x_t W_{xz} + h_{t-1} W_{hz} + b_r)\\\\\n\\tilde{h_t} &=& f(x_t W_{xh} + (r_t \\odot h_{t-1}) W_{hh} + b_h)\\\\\nh_t &=& (1 - z_t) \\odot h_{t-1} + z_t \\odot \\tilde{h_t}$$ (Eq. 9) ",
"where $\\sigma (\\cdot )$ is a sigmoid activation function, $r_t, z_t$ are reset and update gates, $\\tilde{h_t}$ is the candidate hidden layer values and $h_t$ is the hidden layer values at time- $t$ . The reset gates determine which previous hidden layer value is useful for generating the current candidate hidden layer. The update gates keeps the previous hidden layer values or replaced by new candidate hidden layer values. In spite of having one fewer gating layer, the GRU can match LSTM's performance and its convergence speed convergence sometimes outperformed LSTM BIBREF18 ."
],
[
"A Recursive Neural Tensor Network (RecNTN) is a variant of a Recursive Neural Network (RecNN) for modeling input data with variable length properties and tree structure dependencies between input features BIBREF19 . To compute the input representation with RecNN, the input must be parsed into a binary tree where each leaf node represents input data. Then, the parent vectors are computed in a bottom-up fashion, following the above computed tree structure whose information can be built using external computation tools (i.e., syntactic parser) or some heuristic from our dataset observations.",
"Given Fig. 4 , $p_1$ , $p_2$ and $y$ was defined by: ",
"$$ p_1 &=& f\\left( \\begin{bmatrix} x_1 & x_2 \\end{bmatrix} W + b \\right) \\\\\n p_2 &=& f\\left( \\begin{bmatrix} p_1 & x_3 \\end{bmatrix} W + b \\right) \\\\\ny &=& g\\left( p_2 W_y + b_y \\right)$$ (Eq. 13) ",
"where $f(\\cdot )$ is nonlinear activation function, such as sigmoid or tanh, $g(\\cdot )$ depends on our task, $W \\in \\mathbb {R}^{2d \\times d}$ is the weight parameter for projecting child input vectors $x_1, x_2, x_3 \\in \\mathbb {R}^{d}$ into the parent vector, $W_y$ is a weight parameter for computing output vector, and $b, b_y$ are biases. If we want to train RecNN for classification tasks, $g(\\cdot )$ can be defined as a softmax function.",
"However, standard RecNNs have several limitations, where two vectors only implicitly interact with addition before applying a nonlinear activation function on them BIBREF12 and standard RecNNs are not able to model very long-term dependency on tree structures. Zhu et al. BIBREF20 proposed the gating mechanism into standard RecNN model to solve the latter problem. For the former limitation, the RecNN performance can be improved by adding more interaction between the two input vectors. Therefore, a new architecture called a Recursive Neural Tensor Network (RecNTN) tried to overcome the previous problem by adding interaction between two vectors using a tensor product, which is connected by tensor weight parameters. Each slice of the tensor weight can be used to capture the specific pattern between the left and right child vectors. For RecNTN, value $p_1$ from Eq. 13 and is defined by: ",
"$$p_1 &=& f\\left(\n\\begin{bmatrix} x_1 & x_2 \\end{bmatrix} W_{tsr}^{[1:d]} \\begin{bmatrix} x_1 \\\\ x_2 \\end{bmatrix} + \\begin{bmatrix} x_1 & x_2 \\end{bmatrix} W + b \\right) \\\\\np_2 &=& f\\left(\n\\begin{bmatrix} p_1 & x_3 \\end{bmatrix} W_{tsr}^{[1:d]} \\begin{bmatrix} p_1 \\\\ x_3 \\end{bmatrix} + \\begin{bmatrix} p_1 & x_3 \\end{bmatrix} W + b \\right)$$ (Eq. 15) ",
"where $W_{tsr}^{[1:d]} \\in \\mathbb {R}^{2d \\times 2d \\times d}$ is the tensor weight to map the tensor product between two children vectors. Each slice $W_{tsr}^{[i]}$ is a matrix $\\mathbb {R}^{2d \\times 2d}$ . For more details, we visualize the calculation for $p_1$ in Fig. 5 ."
],
[
"Previously in Sections \"Experiment Settings\" and \"Recursive Neural Tensor Network\" , we discussed that the gating mechanism concept can helps RNNs learn long-term dependencies from sequential input data and that adding more powerful interaction between the input and hidden layers simultaneously with the tensor product operation in a bilinear form improves neural network performance and expressiveness. By using tensor product, we increase our model expressiveness by using second-degree polynomial interactions, compared to first-degree polynomial interactions on standard dot product followed by addition in common RNNs architecture. Therefore, in this paper we proposed a Gated Recurrent Neural Tensor Network (GRURNTN) to combine these two advantages into an RNN architecture. In this architecture, the tensor product operation is applied between the current input and previous hidden layer multiplied by the reset gates for calculating the current candidate hidden layer values. The calculation is parameterized by tensor weight. To construct a GRURNTN, we defined the formulation as: ",
"$$r_t &=& \\sigma (x_t W_{xr} + h_{t-1} W_{hr} + b_r) \\nonumber \\\\\nz_t &=& \\sigma (x_t W_{xz} + h_{t-1} W_{hz} + b_z) \\nonumber \\\\\n\\tilde{h_t} &=& f\\left( \\begin{bmatrix} x_t & (r \\odot h_{t-1}) \\end{bmatrix} W_{tsr}^{[1:d]} \\begin{bmatrix} x_t \\\\ (r \\odot h_{t-1}) \\end{bmatrix} \\right. \\nonumber \\\\\n& & \\left. + x_t W_{xh} + (r_t \\odot h_{t-1}) W_{hh} + b_h \\right) \\\\\nh_t &=& (1 - z_t) \\odot h_{t-1} + z_t \\odot \\tilde{h_t} \\nonumber $$ (Eq. 17) ",
"where $W_{tsr}^{[1:d]} \\in \\mathbb {R}^{(i+d) \\times (i+d) \\times d}$ is a tensor weight for mapping the tensor product between the input-hidden layer, $i$ is the input layer size, and $d$ is the hidden layer size. Alternatively, in this paper we use a simpler bilinear form for calculating $\\tilde{h_t}$ : ",
"$$\\tilde{h_t} &=& f\\left( \\begin{bmatrix} x_t \\end{bmatrix} W_{tsr}^{[1:d]} \\begin{bmatrix} (r_t \\odot h_{t-1}) \\end{bmatrix}^{\\intercal } \\right. \\nonumber \\\\\n& & \\left. + x_t W_{xh} + (r_t \\odot h_{t-1}) W_{hh} + b_h \\right) $$ (Eq. 18) ",
"where $W_{tsr}^{[i:d]} \\in \\mathbb {R}^{i \\times d \\times d}$ is a tensor weight. Each slice $W_{tsr}^{[i]}$ is a matrix $\\mathbb {R}^{i \\times d}$ . The advantage of this asymmetric version is that we can still maintain the interaction between the input and hidden layers through a bilinear form. We reduce the number of parameters from the original neural tensor network formulation by using this asymmetric version. Fig. 6 visualizes the $\\tilde{h_t}$ calculation in more detail."
],
[
"As with GRURNTN, we also applied the tensor product operation for the LSTM unit to improve its performance. In this architecture, the tensor product operation is applied between the current input and the previous hidden layers to calculate the current memory cell. The calculation is parameterized by the tensor weight. We call this architecture a Long Short Term Memory Recurrent Neural Tensor Network (LSTMRNTN). To construct an LSTMRNTN, we defined its formulation: ",
"$$i_t &=& \\sigma (x_t W_{xi} + h_{t-1} W_{hi} + c_{t-1} W_{ci} + b_i) \\nonumber \\\\\nf_t &=& \\sigma (x_t W_{xf} + h_{t-1} W_{hf} + c_{t-1} W_{cf} + b_f) \\nonumber \\\\\n\\tilde{c_t} &=& \\tanh \\left( \\begin{bmatrix} x_t \\end{bmatrix} W_{tsr}^{[1:d]} \\begin{bmatrix} h_{t-1} \\end{bmatrix} \\right. \\nonumber \\\\\n& & \\left. + x_t W_{xc} + h_{t-1} W_{hc} + b_c \\right) \\\\\nc_t &=& f_t \\odot c_{t-1} + i_t \\odot \\tilde{c_t} \\\\\no_t &=& \\sigma (x_t W_{xo} + h_{t-1} W_{ho} + c_t W_{co} + b_o) \\nonumber \\\\\nh_t &=& o_t \\odot \\tanh (c_t) \\nonumber $$ (Eq. 21) ",
"where $W_{tsr}^{[1:d]} \\in R^{i \\times d \\times d}$ is a tensor weight to map the tensor product between current input $x_t$ and previous hidden layer $h_{t-1}$ into our candidate cell $\\tilde{c_t}$ . Each slice $W_{tsr}^{[i]}$ is a matrix $\\mathbb {R}^{i \\times d}$ . Fig. 7 visualizes the $\\tilde{c_t}$ calculation in more detail."
],
[
"In this section, we explain how to train the tensor weight for our proposed architecture. Generally, we use backpropagation to train most neural network models BIBREF21 . For training an RNN, researchers tend to use backpropagation through time (BPTT) where the recurrent operation is unfolded as a feedforward neural network along with the time-step when we backpropagate the error BIBREF22 , BIBREF23 . Sometimes we face a performance issue when we unfold our RNN on such very long sequences. To handle that issue, we can use the truncated BPTT BIBREF5 to limit the number of time-steps when we unfold our RNN during backpropagation.",
"Assume we want to do segment classification BIBREF24 with an RNN trained as function $f : \\mathbf {x} \\rightarrow \\mathbf {y}$ , where $\\mathbf {x} = (x_1,...,x_T)$ as an input sequence and $\\mathbf {y} = (y_1,...,y_T)$ is an output label sequence. In this case, probability output label sequence $y$ , given input sequence $\\mathbf {x}$ , is defined as: ",
"$$P(\\mathbf {y}|\\mathbf {x}) = \\prod _{i=1}^{T}P(y_i | x_1,..,x_i)$$ (Eq. 24) ",
"Usually, we transform likelihood $P(\\mathbf {y}|\\mathbf {x})$ into a negative log-likelihood: ",
"$$E(\\theta ) &=& -\\log P(\\mathbf {y}|\\mathbf {x}) = -\\log \\left(\\prod _{i=1}^{T} P(y_{i}|x_1,..,x_i)\\right) \\\\\n&=& -\\sum _{i=1}^{T} \\log P(y_i | x_1,..,x_i)$$ (Eq. 25) ",
"and our objective is to minimize the negative log-likelihood w.r.t all weight parameters $\\theta $ . To optimize $W_{tsr}^{[1:d]}$ weight parameters, we need to find derivative $E(\\theta )$ w.r.t $W_{tsr}^{[1:d]}$ : ",
"$$\\frac{\\partial E(\\theta )}{\\partial W_{tsr}^{[1:d]}} &=& \\sum _{i=1}^{T} \\frac{\\partial E_i(\\theta )}{\\partial W_{tsr}^{[1:d]}} \\nonumber $$ (Eq. 26) ",
"For applying backpropagation through time, we need to unfold our GRURNTN and backpropagate the error from $E_i(\\theta )$ to all candidate hidden layer $\\tilde{h_j}$ to accumulate $W_{tsr}^{[1..d]}$ gradient where $j \\in [1..i]$ . If we want to use the truncated BPTT to ignore the history past over $K$ time-steps, we can limit $j \\in [max(1, i-K) .. i]$ . We define the standard BPTT on GRURNTN to calculate $\\partial E_i(\\theta ) / \\partial W_{tsr}^{[1..d]}$ : ",
"$$\\frac{\\partial E_i(\\theta )}{\\partial W_{tsr}^{[1:d]}} &=& \\sum _{j=1}^{i} \\frac{\\partial E_i(\\theta )}{\\partial \\tilde{h_j}} \\frac{\\partial \\tilde{h_j}}{\\partial W_{tsr}^{[1:d]}} \\nonumber \\\\\n&=& \\sum _{j=1}^{i} \\frac{\\partial E_i(\\theta )}{\\partial \\tilde{h_j}}\\frac{\\partial \\tilde{h_j}}{\\partial a_j} \\frac{\\partial a_j}{\\partial W_{tsr}^{[1:d]}} \\nonumber \\\\\n&=& \\sum _{j=1}^{i} \\frac{\\partial E_i(\\theta )}{\\partial \\tilde{h_j}} f^{\\prime }(a_j) \\begin{bmatrix} x_j \\end{bmatrix}^{\\intercal } \\begin{bmatrix} (r_j \\odot h_{j-1}) \\end{bmatrix} $$ (Eq. 27) ",
"where ",
"$$ a_j &=& \\left( \\begin{bmatrix} x_j \\end{bmatrix} W_{tsr}^{[1:d]} \\begin{bmatrix} (r_j \\odot h_{j-1}) \\end{bmatrix}^{\\intercal } \\right. \\nonumber \\\\ & & \\left. + x_j W_{xh} + (r_j \\odot h_{j-1}) W_{hh} + b_h \\right) \\nonumber $$ (Eq. 28) ",
"and $f^{\\prime }(\\cdot )$ is a function derivative from our activation function : ",
"$$f^{\\prime }(a_j) =\n{\\left\\lbrace \\begin{array}{ll}\n(1-f(a_j)^2), & \\text{if } f(\\cdot ) \\text{ is $\\tanh $ function} \\\\\nf(a_j)(1-f(a_j)), & \\text{if } f(\\cdot ) \\text{ is sigmoid function}\n\\end{array}\\right.} \\nonumber $$ (Eq. 29) ",
"For LSTMRNTN, we also need to unfold our LSTMRNN and backpropagate the error from $E_i(\\theta )$ to all cell layers $c_j$ to accumulate $W_{tsr}^{[1..d]}$ gradients where $j \\in [1..i]$ . We define the standard BPTT on LSTMRNTN to calculate $\\partial E_i(\\theta ) / \\partial W_{tsr}^{[1..d]}$ : ",
"$$\\frac{\\partial E_i(\\theta )}{\\partial W_{tsr}^{[1:d]}} &=& \\sum _{j=1}^{i} \\frac{\\partial E_i{(\\theta )}}{\\partial c_j} \\frac{\\partial c_j}{\\partial W_{tsr}^{[1:d]}} \\nonumber \\\\\n& = & \\sum _{j=1}^{i} \\frac{\\partial E_i{(\\theta )}}{\\partial c_j} \\frac{\\partial c_j}{\\partial \\tanh (a_j)} \\frac{\\partial \\tanh (a_j)}{\\partial a_j} \\frac{\\partial a_j}{\\partial W_{tsr}^{[1:d]}} \\nonumber \\\\\n& = & \\sum _{j=1}^{i} \\frac{\\partial E_i{(\\theta )}}{\\partial c_j} i_j (1-\\tanh ^2(a_j)) \\begin{bmatrix} x_j \\end{bmatrix}^{\\intercal } \\begin{bmatrix} h_{j-1} \\end{bmatrix} $$ (Eq. 30) ",
"where ",
"$$ a_j &=& \\left(\\begin{bmatrix} x_j \\end{bmatrix} W_{tsr}^{[1:d]} \\begin{bmatrix} h_{j-1} \\end{bmatrix} + x_j W_{xc} + h_{j-1} W_{hc} + b_c \\right) $$ (Eq. 31) ",
". In both proposed models, we can see partial derivative ${\\partial E_i(\\theta )} / {\\partial W_{tsr}^{[1:d]}}$ in Eqs. 27 and 30 , the derivative from the tensor product w.r.t the tensor weight parameters depends on the values of our input and hidden layers. Then all the slices of tensor weight derivative are multiplied by the error from their corresponding pre-activated hidden unit values. From these derivations, we are able to see where each slice of tensor weight is learned more directly from their input and hidden layer values compared by using standard addition operations. After we accumulated every parameter's gradients from all the previous time-steps, we use a stochastic gradient optimization method such as AdaGrad BIBREF25 to optimize our model parameters."
],
[
"Next we evaluate our proposed GRURNTN and LSTMRNTN models against baselines GRURNN and LSTMRNN with two different tasks and datasets."
],
[
"We used a PennTreeBank (PTB) corpus, which is a standard benchmark corpus for statistical language modeling. A PTB corpus is a subset of the WSJ corpus. In this experiment, we followed the standard preprocessing step that was done by previous research BIBREF23 . The PTB dataset is divided as follows: a training set from sections 0-20 with total 930.000 words, a validation set from sections 21-22 with total 74.000 words, and a test set from sections 23-24 with total 82.000 words. The vocabulary is limited to the 10.000 most common words, and all words outside are mapped into a \" $<$ unk $>$ \" token. We used the preprocessed PTB corpus from the RNNLM-toolkit website.",
"We did two different language modeling tasks. First, we experimented on a word-level language model where our RNN predicts the next word probability given the previous words and current word. We used perplexity (PPL) to measure our RNN performance for word-level language modeling. The formula for calculating the PPL of word sequence $X$ is defined by: ",
"$$PPL = 2^{-\\frac{1}{N}\\sum _{i=1}^{N} \\log _2 P(X_i|X_{1..{i-1}})}$$ (Eq. 35) ",
"Second, we experimented on a character-level language model where our RNN predicts the next character probability given the previous characters and current character. We used the average number of bits-per-character (BPC) to measure our RNN performance for character-level language modeling. The formula for calculating the BPC of character sequence $X$ is defined by: ",
"$$BPC = -\\frac{1}{N}\\left(\\sum _{i=1}^{N}\\log _2{p(X_i|X_{1..{i-1}})} \\right)$$ (Eq. 36) "
],
[
"In this experiment, we compared the performance from our baseline models GRURNN and LSTMRNN with our proposed GRURNTN and LSTMRNTN models. We used the same dimensions for the embedding matrix to represent the words and characters as the vectors of real numbers.",
"For the word-level language modeling task, we used 256 hidden units for GRURNTN and LSTMRNTN, 860 for GRURNN, and 740 for LSTMRNN. All of these models use 128 dimensions for word embedding. We used dropout regularization with $p=0.5$ dropout probability for GRURNTN and LSTMRNTN and $p=0.6$ for our baseline model. The total number of free parameters for GRURNN and GRURNTN were about 12 million and about 13 million for LSTMRNN and LSTMRNTN.",
"For the character-level language modeling task, we used 256 hidden units for GRURNTN and LSTMRNTN, 820 for GRURNN, and 600 for LSTMRNTN. All of these models used 32 dimensions for character embedding. We used dropout regularization with $p=0.25$ dropout probability. The total number of free parameters for GRURNN and GRURNTN was about 2.2 million and about 2.6 million for LSTMRNN and LSTMRNTN.",
"We constrained our baseline GRURNN to have a similar number of parameters as the GRURNTN model for a fair comparison. We also applied such constraints on our baseline LSTMRNN to LSTMRNTN model.",
"For all the experiment scenarios, we used AdaGrad for our stochastic gradient optimization method with mini-batch training and a batch size of 15 sentences. We multiplied our learning rate with a decay factor of 0.5 when the cost from the development set for current epoch is greater than previous epoch. We also used a rescaling trick on the gradient BIBREF26 when the norm was larger than 5 to avoid the issue of exploding gradients. For initializing the parameters, we used the orthogonal weight initialization trick BIBREF27 on every model."
],
[
"In this section, we report our experiment results on PTB character-level language modeling using our baseline models GRURNN and LSTMRNN as well as our proposed models GRURNTN and LSTMRNTN. Fig. 8 shows performance comparisons from every model based on the validation set's BPC per epoch. In this experiment, GRURNN made faster progress than LSTMRNN, but eventually LSTMRNN converged into a better BPC based on the development set. Our proposed model GRURNTN made faster and quicker progress than LSTMRNTN and converged into a similar BPC in the last epoch. Both proposed models produced lower BPC than our baseline models from the first epoch to the last epoch.",
"Table 1 shows PTB test set BPC among our baseline models, our proposed models and several published results. Our proposed model GRURNTN and LSTMRNTN outperformed both baseline models. GRURNTN reduced the BPC from 1.39 to 1.33 (0.06 absolute / 4.32% relative BPC) from the baseline GRURNN, and LSTMRNTN reduced the BPC from 1.37 to 1.34 (0.03 absolute / 2.22% relative BPC) from the baseline LSTMRNN. Overall, GRURNTN slightly outperformed LSTMRNTN, and both proposed models outperformed all of the baseline models on the character-level language modeling task."
],
[
"In this section, we report our experiment results on PTB word-level language modeling using our baseline models GRURNN and LSTMRNN and our proposed models GRURNTN and LSTMRNTN. Fig. 9 compares the performance from every models based on the validation set's PPL per epoch. In this experiment, GRURNN made faster progress than LSTMRNN. Our proposed GRURNTN's progress was also better than LSTMRNTN. The best model in this task was GRURNTN, which had a consistently lower PPL than the other models.",
"Table 1 shows the PTB test set PPL among our baseline models, proposed models, and several published results. Both our proposed models outperformed their baseline models. GRURNTN reduced the perplexity from 97.78 to 87.38 (10.4 absolute / 10.63% relative PPL) over the baseline GRURNN and LSTMRNTN reduced the perplexity from 108.26 to 96.97 (11.29 absolute / 10.42% relative PPL) over the baseline LSTMRNN. Overall, LSTMRNTN improved the LSTMRNN model and its performance closely resembles the baseline GRURNN. However, GRURNTN outperformed all the baseline models as well as the other models by a large margin."
],
[
"Representing hidden states with deeper operations was introduced just a few years ago BIBREF11 . In these works, Pascanu et al. BIBREF11 use additional nonlinear layers for representing the transition from input to hidden layers, hidden to hidden layers, and hidden to output layers. They also improved the RNN architecture by a adding shortcut connection in the deep transition by skipping the intermediate layers. Another work from BIBREF33 proposed a new RNN design for a stacked RNN model called Gated Feedback RNN (GFRNN), which adds more connections from all the previous time-step stacked hidden layers into the current hidden layer computations. Despite adding additional transition layers and connection weight from previous hidden layers, all of these models still represent the input and hidden layer relationships by using linear projection, addition and nonlinearity transformation.",
"On the tensor-based models, Irsoy et al. BIBREF34 proposed a simple RNN with a tensor product between the input and hidden layers. Such architecture resembles RecNTN, given a parse tree with a completely unbalanced tree on one side. Another work from BIBREF35 also use tensor products for representing hidden layers on DNN. By splitting the weight matrix into two parallel weight matrices, they calculated two parallel hidden layers and combined the pair of hidden layers using a tensor product. However, since not all of those models use a gating mechanism, the tensor parameters and tensor product operation can not be fully utilized because of the vanishing (or exploding) gradient problem.",
"On the recurrent neural network-based model, Sutskever et al. BIBREF30 proposed multiplicative RNN (mRNN) for character-level language modeling using tensor as the weight parameters. They proposed two different models. The first selected a slice of tensor weight based on the current character input, and the second improved the first model with factorization for constructing a hidden-to-hidden layer weight. However, those models fail to fully utilize the tensor weight with the tensor product. After they selected the weight matrix based on the current input information, they continue to use linear projection, addition, and nonlinearity for interacting between the input and hidden layers.",
"To the best of our knowledge, none of these works combined the gating mechanism and tensor product concepts into a single neural network architecture. In this paper, we built a new RNN by combining gating units and tensor products into a single RNN architecture. We expect that our proposed GRURNTN and LSTMRNTN architecture will improve the RNN performance for modeling temporal and sequential datasets."
],
[
"We presented a new RNN architecture by combining the gating mechanism and tensor product concepts. Our proposed architecture can learn long-term dependencies from temporal and sequential data using gating units as well as more powerful interaction between the current input and previous hidden layers by introducing tensor product operations. From our experiment on the PennTreeBank corpus, our proposed models outperformed the baseline models with a similar number of parameters in character-level language modeling and word-level language modeling tasks. In a character-level language modeling task, GRURNTN obtained 0.06 absolute (4.32% relative) BPC reduction over GRURNN, and LSTMRNTN obtained 0.03 absolute (2.22% relative) BPC reduction over LSTMRNN. In a word-level language modeling task, GRURNTN obtained 10.4 absolute (10.63% relative) PPL reduction over GRURNN, and LSTMRNTN obtained 11.29 absolute (10.42% relative PPL) reduction over LSTMRNN. In the future, we will investigate the possibility of combining our model with other stacked RNNs architecture, such as Gated Feedback RNN (GFRNN). We would also like to explore other possible tensor operations and integrate them with our RNN architecture. By applying these ideas together, we expect to gain further performance improvement. Last, for further investigation we will apply our proposed models to other temporal and sequential tasks, such as speech recognition and video recognition."
],
[
"Part of this research was supported by JSPS KAKENHI Grant Number 26870371."
]
],
"section_name": [
"Introduction",
"Recurrent Neural Network",
"Gated Recurrent Neural Network",
"Recursive Neural Tensor Network",
"Gated Recurrent Unit Recurrent Neural Tensor Network (GRURNTN)",
"LSTM Recurrent Neural Tensor Network (LSTMRNTN)",
"Optimizing Tensor Weight using Backpropagation Through Time",
"Experiment Settings",
"Datasets and Tasks",
"Experiment Models",
"Character-level Language Modeling",
"Word-level Language Modeling",
"Related Work",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"359b24c7c1cb652e5bd2a097c68e587379251427",
"ba185279f31f9a2361ebef3af94875ee67fd5fde"
],
"answer": [
{
"evidence": [
"Table 1 shows PTB test set BPC among our baseline models, our proposed models and several published results. Our proposed model GRURNTN and LSTMRNTN outperformed both baseline models. GRURNTN reduced the BPC from 1.39 to 1.33 (0.06 absolute / 4.32% relative BPC) from the baseline GRURNN, and LSTMRNTN reduced the BPC from 1.37 to 1.34 (0.03 absolute / 2.22% relative BPC) from the baseline LSTMRNN. Overall, GRURNTN slightly outperformed LSTMRNTN, and both proposed models outperformed all of the baseline models on the character-level language modeling task.",
"Table 1 shows the PTB test set PPL among our baseline models, proposed models, and several published results. Both our proposed models outperformed their baseline models. GRURNTN reduced the perplexity from 97.78 to 87.38 (10.4 absolute / 10.63% relative PPL) over the baseline GRURNN and LSTMRNTN reduced the perplexity from 108.26 to 96.97 (11.29 absolute / 10.42% relative PPL) over the baseline LSTMRNN. Overall, LSTMRNTN improved the LSTMRNN model and its performance closely resembles the baseline GRURNN. However, GRURNTN outperformed all the baseline models as well as the other models by a large margin."
],
"extractive_spans": [
"0.03 absolute / 2.22% relative BPC",
"11.29 absolute / 10.42% relative PPL"
],
"free_form_answer": "",
"highlighted_evidence": [
"GRURNTN reduced the BPC from 1.39 to 1.33 (0.06 absolute / 4.32% relative BPC) from the baseline GRURNN, and LSTMRNTN reduced the BPC from 1.37 to 1.34 (0.03 absolute / 2.22% relative BPC) from the baseline LSTMRNN.",
"GRURNTN reduced the perplexity from 97.78 to 87.38 (10.4 absolute / 10.63% relative PPL) over the baseline GRURNN and LSTMRNTN reduced the perplexity from 108.26 to 96.97 (11.29 absolute / 10.42% relative PPL) over the baseline LSTMRNN."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We presented a new RNN architecture by combining the gating mechanism and tensor product concepts. Our proposed architecture can learn long-term dependencies from temporal and sequential data using gating units as well as more powerful interaction between the current input and previous hidden layers by introducing tensor product operations. From our experiment on the PennTreeBank corpus, our proposed models outperformed the baseline models with a similar number of parameters in character-level language modeling and word-level language modeling tasks. In a character-level language modeling task, GRURNTN obtained 0.06 absolute (4.32% relative) BPC reduction over GRURNN, and LSTMRNTN obtained 0.03 absolute (2.22% relative) BPC reduction over LSTMRNN. In a word-level language modeling task, GRURNTN obtained 10.4 absolute (10.63% relative) PPL reduction over GRURNN, and LSTMRNTN obtained 11.29 absolute (10.42% relative PPL) reduction over LSTMRNN. In the future, we will investigate the possibility of combining our model with other stacked RNNs architecture, such as Gated Feedback RNN (GFRNN). We would also like to explore other possible tensor operations and integrate them with our RNN architecture. By applying these ideas together, we expect to gain further performance improvement. Last, for further investigation we will apply our proposed models to other temporal and sequential tasks, such as speech recognition and video recognition."
],
"extractive_spans": [
"GRURNTN reduced the BPC from 1.39 to 1.33 (0.06 absolute / 4.32% relative BPC) from the baseline GRURNN, and LSTMRNTN reduced the BPC from 1.37 to 1.34 (0.03 absolute / 2.22% relative BPC) from the baseline LSTMRNN.",
"From our experiment on the PennTreeBank corpus, our proposed models outperformed the baseline models with a similar number of parameters in character-level language modeling and word-level language modeling tasks. In a character-level language modeling task, GRURNTN obtained 0.06 absolute (4.32% relative) BPC reduction over GRURNN, and LSTMRNTN obtained 0.03 absolute (2.22% relative) BPC reduction over LSTMRNN. In a word-level language modeling task, GRURNTN obtained 10.4 absolute (10.63% relative) PPL reduction over GRURNN, and LSTMRNTN obtained 11.29 absolute (10.42% relative PPL) reduction over LSTMRNN."
],
"free_form_answer": "",
"highlighted_evidence": [
"From our experiment on the PennTreeBank corpus, our proposed models outperformed the baseline models with a similar number of parameters in character-level language modeling and word-level language modeling tasks. In a character-level language modeling task, GRURNTN obtained 0.06 absolute (4.32% relative) BPC reduction over GRURNN, and LSTMRNTN obtained 0.03 absolute (2.22% relative) BPC reduction over LSTMRNN. In a word-level language modeling task, GRURNTN obtained 10.4 absolute (10.63% relative) PPL reduction over GRURNN, and LSTMRNTN obtained 11.29 absolute (10.42% relative PPL) reduction over LSTMRNN. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"7a61dc43e2b034078b8a1d063e78423f55080a8e",
"85e018a2f8eda0574d99118db2b3b70c26d26750"
],
"answer": [
{
"evidence": [
"Previously in Sections \"Experiment Settings\" and \"Recursive Neural Tensor Network\" , we discussed that the gating mechanism concept can helps RNNs learn long-term dependencies from sequential input data and that adding more powerful interaction between the input and hidden layers simultaneously with the tensor product operation in a bilinear form improves neural network performance and expressiveness. By using tensor product, we increase our model expressiveness by using second-degree polynomial interactions, compared to first-degree polynomial interactions on standard dot product followed by addition in common RNNs architecture. Therefore, in this paper we proposed a Gated Recurrent Neural Tensor Network (GRURNTN) to combine these two advantages into an RNN architecture. In this architecture, the tensor product operation is applied between the current input and previous hidden layer multiplied by the reset gates for calculating the current candidate hidden layer values. The calculation is parameterized by tensor weight. To construct a GRURNTN, we defined the formulation as:",
"As with GRURNTN, we also applied the tensor product operation for the LSTM unit to improve its performance. In this architecture, the tensor product operation is applied between the current input and the previous hidden layers to calculate the current memory cell. The calculation is parameterized by the tensor weight. We call this architecture a Long Short Term Memory Recurrent Neural Tensor Network (LSTMRNTN). To construct an LSTMRNTN, we defined its formulation:"
],
"extractive_spans": [
"in this paper we proposed a Gated Recurrent Neural Tensor Network (GRURNTN) to combine these two advantages into an RNN architecture. In this architecture, the tensor product operation is applied between the current input and previous hidden layer multiplied by the reset gates for calculating the current candidate hidden layer values.",
"As with GRURNTN, we also applied the tensor product operation for the LSTM unit to improve its performance. In this architecture, the tensor product operation is applied between the current input and the previous hidden layers to calculate the current memory cell. The calculation is parameterized by the tensor weight. We call this architecture a Long Short Term Memory Recurrent Neural Tensor Network (LSTMRNTN). "
],
"free_form_answer": "",
"highlighted_evidence": [
"in this paper we proposed a Gated Recurrent Neural Tensor Network (GRURNTN) to combine these two advantages into an RNN architecture. In this architecture, the tensor product operation is applied between the current input and previous hidden layer multiplied by the reset gates for calculating the current candidate hidden layer values.",
"As with GRURNTN, we also applied the tensor product operation for the LSTM unit to improve its performance. In this architecture, the tensor product operation is applied between the current input and the previous hidden layers to calculate the current memory cell. The calculation is parameterized by the tensor weight. We call this architecture a Long Short Term Memory Recurrent Neural Tensor Network (LSTMRNTN). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"However, standard RecNNs have several limitations, where two vectors only implicitly interact with addition before applying a nonlinear activation function on them BIBREF12 and standard RecNNs are not able to model very long-term dependency on tree structures. Zhu et al. BIBREF20 proposed the gating mechanism into standard RecNN model to solve the latter problem. For the former limitation, the RecNN performance can be improved by adding more interaction between the two input vectors. Therefore, a new architecture called a Recursive Neural Tensor Network (RecNTN) tried to overcome the previous problem by adding interaction between two vectors using a tensor product, which is connected by tensor weight parameters. Each slice of the tensor weight can be used to capture the specific pattern between the left and right child vectors. For RecNTN, value $p_1$ from Eq. 13 and is defined by:",
"$$p_1 &=& f\\left( \\begin{bmatrix} x_1 & x_2 \\end{bmatrix} W_{tsr}^{[1:d]} \\begin{bmatrix} x_1 \\\\ x_2 \\end{bmatrix} + \\begin{bmatrix} x_1 & x_2 \\end{bmatrix} W + b \\right) \\\\ p_2 &=& f\\left( \\begin{bmatrix} p_1 & x_3 \\end{bmatrix} W_{tsr}^{[1:d]} \\begin{bmatrix} p_1 \\\\ x_3 \\end{bmatrix} + \\begin{bmatrix} p_1 & x_3 \\end{bmatrix} W + b \\right)$$ (Eq. 15)",
"where $W_{tsr}^{[1:d]} \\in \\mathbb {R}^{2d \\times 2d \\times d}$ is the tensor weight to map the tensor product between two children vectors. Each slice $W_{tsr}^{[i]}$ is a matrix $\\mathbb {R}^{2d \\times 2d}$ . For more details, we visualize the calculation for $p_1$ in Fig. 5 ."
],
"extractive_spans": [
"For the former limitation, the RecNN performance can be improved by adding more interaction between the two input vectors. Therefore, a new architecture called a Recursive Neural Tensor Network (RecNTN) tried to overcome the previous problem by adding interaction between two vectors using a tensor product, which is connected by tensor weight parameters. Each slice of the tensor weight can be used to capture the specific pattern between the left and right child vectors. For RecNTN, value $p_1$ from Eq. 13 and is defined by:\n\n$$p_1 &=& f\\left( \\begin{bmatrix} x_1 & x_2 \\end{bmatrix} W_{tsr}^{[1:d]} \\begin{bmatrix} x_1 \\\\ x_2 \\end{bmatrix} + \\begin{bmatrix} x_1 & x_2 \\end{bmatrix} W + b \\right) \\\\ p_2 &=& f\\left( \\begin{bmatrix} p_1 & x_3 \\end{bmatrix} W_{tsr}^{[1:d]} \\begin{bmatrix} p_1 \\\\ x_3 \\end{bmatrix} + \\begin{bmatrix} p_1 & x_3 \\end{bmatrix} W + b \\right)$$ (Eq. 15)\n\nwhere $W_{tsr}^{[1:d]} \\in \\mathbb {R}^{2d \\times 2d \\times d}$ is the tensor weight to map the tensor product between two children vectors. Each slice $W_{tsr}^{[i]}$ is a matrix $\\mathbb {R}^{2d \\times 2d}$ . "
],
"free_form_answer": "",
"highlighted_evidence": [
"For the former limitation, the RecNN performance can be improved by adding more interaction between the two input vectors. Therefore, a new architecture called a Recursive Neural Tensor Network (RecNTN) tried to overcome the previous problem by adding interaction between two vectors using a tensor product, which is connected by tensor weight parameters. Each slice of the tensor weight can be used to capture the specific pattern between the left and right child vectors. For RecNTN, value $p_1$ from Eq. 13 and is defined by:\n\n$$p_1 &=& f\\left( \\begin{bmatrix} x_1 & x_2 \\end{bmatrix} W_{tsr}^{[1:d]} \\begin{bmatrix} x_1 \\\\ x_2 \\end{bmatrix} + \\begin{bmatrix} x_1 & x_2 \\end{bmatrix} W + b \\right) \\\\ p_2 &=& f\\left( \\begin{bmatrix} p_1 & x_3 \\end{bmatrix} W_{tsr}^{[1:d]} \\begin{bmatrix} p_1 \\\\ x_3 \\end{bmatrix} + \\begin{bmatrix} p_1 & x_3 \\end{bmatrix} W + b \\right)$$ (Eq. 15)\n\nwhere $W_{tsr}^{[1:d]} \\in \\mathbb {R}^{2d \\times 2d \\times d}$ is the tensor weight to map the tensor product between two children vectors. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"16371f13148f7cb4c9cf09a3b17b15225c5844ba",
"b4ae4fb3864ab78c077c3c356c4796db4de243df"
],
"answer": [
{
"evidence": [
"In this section, we report our experiment results on PTB character-level language modeling using our baseline models GRURNN and LSTMRNN as well as our proposed models GRURNTN and LSTMRNTN. Fig. 8 shows performance comparisons from every model based on the validation set's BPC per epoch. In this experiment, GRURNN made faster progress than LSTMRNN, but eventually LSTMRNN converged into a better BPC based on the development set. Our proposed model GRURNTN made faster and quicker progress than LSTMRNTN and converged into a similar BPC in the last epoch. Both proposed models produced lower BPC than our baseline models from the first epoch to the last epoch.",
"Table 1 shows PTB test set BPC among our baseline models, our proposed models and several published results. Our proposed model GRURNTN and LSTMRNTN outperformed both baseline models. GRURNTN reduced the BPC from 1.39 to 1.33 (0.06 absolute / 4.32% relative BPC) from the baseline GRURNN, and LSTMRNTN reduced the BPC from 1.37 to 1.34 (0.03 absolute / 2.22% relative BPC) from the baseline LSTMRNN. Overall, GRURNTN slightly outperformed LSTMRNTN, and both proposed models outperformed all of the baseline models on the character-level language modeling task.",
"In this section, we report our experiment results on PTB word-level language modeling using our baseline models GRURNN and LSTMRNN and our proposed models GRURNTN and LSTMRNTN. Fig. 9 compares the performance from every models based on the validation set's PPL per epoch. In this experiment, GRURNN made faster progress than LSTMRNN. Our proposed GRURNTN's progress was also better than LSTMRNTN. The best model in this task was GRURNTN, which had a consistently lower PPL than the other models.",
"Table 1 shows the PTB test set PPL among our baseline models, proposed models, and several published results. Both our proposed models outperformed their baseline models. GRURNTN reduced the perplexity from 97.78 to 87.38 (10.4 absolute / 10.63% relative PPL) over the baseline GRURNN and LSTMRNTN reduced the perplexity from 108.26 to 96.97 (11.29 absolute / 10.42% relative PPL) over the baseline LSTMRNN. Overall, LSTMRNTN improved the LSTMRNN model and its performance closely resembles the baseline GRURNN. However, GRURNTN outperformed all the baseline models as well as the other models by a large margin.",
"We presented a new RNN architecture by combining the gating mechanism and tensor product concepts. Our proposed architecture can learn long-term dependencies from temporal and sequential data using gating units as well as more powerful interaction between the current input and previous hidden layers by introducing tensor product operations. From our experiment on the PennTreeBank corpus, our proposed models outperformed the baseline models with a similar number of parameters in character-level language modeling and word-level language modeling tasks. In a character-level language modeling task, GRURNTN obtained 0.06 absolute (4.32% relative) BPC reduction over GRURNN, and LSTMRNTN obtained 0.03 absolute (2.22% relative) BPC reduction over LSTMRNN. In a word-level language modeling task, GRURNTN obtained 10.4 absolute (10.63% relative) PPL reduction over GRURNN, and LSTMRNTN obtained 11.29 absolute (10.42% relative PPL) reduction over LSTMRNN. In the future, we will investigate the possibility of combining our model with other stacked RNNs architecture, such as Gated Feedback RNN (GFRNN). We would also like to explore other possible tensor operations and integrate them with our RNN architecture. By applying these ideas together, we expect to gain further performance improvement. Last, for further investigation we will apply our proposed models to other temporal and sequential tasks, such as speech recognition and video recognition."
],
"extractive_spans": [
"we report our experiment results on PTB character-level language modeling using our baseline models GRURNN and LSTMRNN as well as our proposed models GRURNTN and LSTMRNTN. ",
"In this experiment, GRURNN made faster progress than LSTMRNN, but eventually LSTMRNN converged into a better BPC based on the development set. Our proposed model GRURNTN made faster and quicker progress than LSTMRNTN and converged into a similar BPC in the last epoch. Both proposed models produced lower BPC than our baseline models from the first epoch to the last epoch.",
"Our proposed model GRURNTN and LSTMRNTN outperformed both baseline models. GRURNTN reduced the BPC from 1.39 to 1.33 (0.06 absolute / 4.32% relative BPC) from the baseline GRURNN, and LSTMRNTN reduced the BPC from 1.37 to 1.34 (0.03 absolute / 2.22% relative BPC) from the baseline LSTMRNN. Overall, GRURNTN slightly outperformed LSTMRNTN, and both proposed models outperformed all of the baseline models on the character-level language modeling task.",
"we report our experiment results on PTB word-level language modeling using our baseline models GRURNN and LSTMRNN and our proposed models GRURNTN and LSTMRNTN. ",
"In this experiment, GRURNN made faster progress than LSTMRNN. Our proposed GRURNTN's progress was also better than LSTMRNTN. The best model in this task was GRURNTN, which had a consistently lower PPL than the other models.",
"GRURNTN reduced the perplexity from 97.78 to 87.38 (10.4 absolute / 10.63% relative PPL) over the baseline GRURNN and LSTMRNTN reduced the perplexity from 108.26 to 96.97 (11.29 absolute / 10.42% relative PPL) over the baseline LSTMRNN. Overall, LSTMRNTN improved the LSTMRNN model and its performance closely resembles the baseline GRURNN. However, GRURNTN outperformed all the baseline models as well as the other models by a large margin.",
"In a character-level language modeling task, GRURNTN obtained 0.06 absolute (4.32% relative) BPC reduction over GRURNN, and LSTMRNTN obtained 0.03 absolute (2.22% relative) BPC reduction over LSTMRNN. In a word-level language modeling task, GRURNTN obtained 10.4 absolute (10.63% relative) PPL reduction over GRURNN, and LSTMRNTN obtained 11.29 absolute (10.42% relative PPL) reduction over LSTMRNN. "
],
"free_form_answer": "",
"highlighted_evidence": [
"we report our experiment results on PTB character-level language modeling using our baseline models GRURNN and LSTMRNN as well as our proposed models GRURNTN and LSTMRNTN. ",
"In this experiment, GRURNN made faster progress than LSTMRNN, but eventually LSTMRNN converged into a better BPC based on the development set. Our proposed model GRURNTN made faster and quicker progress than LSTMRNTN and converged into a similar BPC in the last epoch. Both proposed models produced lower BPC than our baseline models from the first epoch to the last epoch.",
"Our proposed model GRURNTN and LSTMRNTN outperformed both baseline models. GRURNTN reduced the BPC from 1.39 to 1.33 (0.06 absolute / 4.32% relative BPC) from the baseline GRURNN, and LSTMRNTN reduced the BPC from 1.37 to 1.34 (0.03 absolute / 2.22% relative BPC) from the baseline LSTMRNN. Overall, GRURNTN slightly outperformed LSTMRNTN, and both proposed models outperformed all of the baseline models on the character-level language modeling task.",
"In this experiment, GRURNN made faster progress than LSTMRNN. Our proposed GRURNTN's progress was also better than LSTMRNTN. The best model in this task was GRURNTN, which had a consistently lower PPL than the other models.",
"GRURNTN reduced the perplexity from 97.78 to 87.38 (10.4 absolute / 10.63% relative PPL) over the baseline GRURNN and LSTMRNTN reduced the perplexity from 108.26 to 96.97 (11.29 absolute / 10.42% relative PPL) over the baseline LSTMRNN. Overall, LSTMRNTN improved the LSTMRNN model and its performance closely resembles the baseline GRURNN. However, GRURNTN outperformed all the baseline models as well as the other models by a large margin.",
"In a character-level language modeling task, GRURNTN obtained 0.06 absolute (4.32% relative) BPC reduction over GRURNN, and LSTMRNTN obtained 0.03 absolute (2.22% relative) BPC reduction over LSTMRNN. In a word-level language modeling task, GRURNTN obtained 10.4 absolute (10.63% relative) PPL reduction over GRURNN, and LSTMRNTN obtained 11.29 absolute (10.42% relative PPL) reduction over LSTMRNN. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table 1 shows PTB test set BPC among our baseline models, our proposed models and several published results. Our proposed model GRURNTN and LSTMRNTN outperformed both baseline models. GRURNTN reduced the BPC from 1.39 to 1.33 (0.06 absolute / 4.32% relative BPC) from the baseline GRURNN, and LSTMRNTN reduced the BPC from 1.37 to 1.34 (0.03 absolute / 2.22% relative BPC) from the baseline LSTMRNN. Overall, GRURNTN slightly outperformed LSTMRNTN, and both proposed models outperformed all of the baseline models on the character-level language modeling task.",
"Table 1 shows the PTB test set PPL among our baseline models, proposed models, and several published results. Both our proposed models outperformed their baseline models. GRURNTN reduced the perplexity from 97.78 to 87.38 (10.4 absolute / 10.63% relative PPL) over the baseline GRURNN and LSTMRNTN reduced the perplexity from 108.26 to 96.97 (11.29 absolute / 10.42% relative PPL) over the baseline LSTMRNN. Overall, LSTMRNTN improved the LSTMRNN model and its performance closely resembles the baseline GRURNN. However, GRURNTN outperformed all the baseline models as well as the other models by a large margin."
],
"extractive_spans": [],
"free_form_answer": "GRURNTN, character: 0.06 absolute / 4.32% relative bits-per-character.\nLSTMRNTN, character: 0.03 absolute / 2.22% relative bits-per-character.\nGRURNTN, word: 10.4 absolute / 10.63% relative perplexity.\nLSTMRNTN, word: 11.29 absolute / 10.42% relative perplexity.",
"highlighted_evidence": [
"GRURNTN reduced the BPC from 1.39 to 1.33 (0.06 absolute / 4.32% relative BPC) from the baseline GRURNN, and LSTMRNTN reduced the BPC from 1.37 to 1.34 (0.03 absolute / 2.22% relative BPC) from the baseline LSTMRNN.",
"GRURNTN reduced the perplexity from 97.78 to 87.38 (10.4 absolute / 10.63% relative PPL) over the baseline GRURNN and LSTMRNTN reduced the perplexity from 108.26 to 96.97 (11.29 absolute / 10.42% relative PPL) over the baseline LSTMRNN."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
],
"nlp_background": [
"five",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How significant is the performance compared to LSTM model?",
"How does the introduced model combine the both factors?",
"How much improvement do the introduced model achieve compared to the previous models?"
],
"question_id": [
"77a331d4d909d92fab9552b429adde5379b2ae69",
"516b691ef192f136bb037c12c3c9365ef5a6604c",
"c53b036eff430a9d0449fb50b8d2dc9d2679d9fe"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"recurrent network",
"",
""
],
"topic_background": [
"unfamiliar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 2. Long Short Term Memory Unit.",
"Fig. 1. Recurrent Neural Network",
"Fig. 4. Computation for parent values p1 and p2 was done in a bottom-up fashion. Visible node leaves x1, x2, and x3 are processed based on the given binary tree structure.",
"Fig. 5. Calculating vector p1 from left input x1 and right input x2 based on Eq. 15",
"Fig. 3. Gated Recurrent Unit",
"Fig. 6. Calculating candidate hidden layer h̃t from current input xt and previous hidden layer multiplied by reset gate r · ht1 based on Eq. 18",
"Fig. 7. Calculating candidate cell layer c̃t from current input xt and previous hidden layer ht−1 based on Eq. 19",
"TABLE I PennTreeBank test set BPC",
"Fig. 8. Comparison among GRURNN, GRURNTN, LSTMRNN, and LSTMRNTN bits-per-character (BPC) per epoch on PTB validation set. Note : a lower BPC score is better",
"TABLE II PennTreeBank test set PPL",
"Fig. 9. Comparison among GRURNN, GRURNTN, LSTMRNN and LSTMRNTN perplexity (PPL) per epoch on the PTB validation set. Note : a lower PPL value is better"
],
"file": [
"2-Figure2-1.png",
"2-Figure1-1.png",
"3-Figure4-1.png",
"3-Figure5-1.png",
"3-Figure3-1.png",
"4-Figure6-1.png",
"4-Figure7-1.png",
"6-TableI-1.png",
"6-Figure8-1.png",
"7-TableII-1.png",
"7-Figure9-1.png"
]
} | [
"How much improvement do the introduced model achieve compared to the previous models?"
] | [
[
"1706.02222-Conclusion-0",
"1706.02222-Word-level Language Modeling-0",
"1706.02222-Character-level Language Modeling-1",
"1706.02222-Word-level Language Modeling-1",
"1706.02222-Character-level Language Modeling-0"
]
] | [
"GRURNTN, character: 0.06 absolute / 4.32% relative bits-per-character.\nLSTMRNTN, character: 0.03 absolute / 2.22% relative bits-per-character.\nGRURNTN, word: 10.4 absolute / 10.63% relative perplexity.\nLSTMRNTN, word: 11.29 absolute / 10.42% relative perplexity."
] | 343 |
1804.00982 | 360{\deg} Stance Detection | The proliferation of fake news and filter bubbles makes it increasingly difficult to form an unbiased, balanced opinion towards a topic. To ameliorate this, we propose 360{\deg} Stance Detection, a tool that aggregates news with multiple perspectives on a topic. It presents them on a spectrum ranging from support to opposition, enabling the user to base their opinion on multiple pieces of diverse evidence. | {
"paragraphs": [
[
"The growing epidemic of fake news in the wake of the election cycle for the 45th President of the United States has revealed the danger of staying within our filter bubbles. In light of this development, research in detecting false claims has received renewed interest BIBREF0 . However, identifying and flagging false claims may not be the best solution, as putting a strong image, such as a red flag, next to an article may actually entrench deeply held beliefs BIBREF1 .",
"A better alternative would be to provide additional evidence that will allow a user to evaluate multiple viewpoints and decide with which they agree. To this end, we propose 360° INLINEFORM0 INLINEFORM1 Stance Detection, a tool that provides a wide view of a topic from different perspectives to aid with forming a balanced opinion. Given a topic, the tool aggregates relevant news articles from different sources and leverages recent advances in stance detection to lay them out on a spectrum ranging from support to opposition to the topic.",
"Stance detection is the task of estimating whether the attitude expressed in a text towards a given topic is `in favour', `against', or `neutral'. We collected and annotated a novel dataset, which associates news articles with a stance towards a specified topic. We then trained a state-of-the-art stance detection model BIBREF2 on this dataset.",
"The stance detection model is integrated into the 360° INLINEFORM0 INLINEFORM1 Stance Detection website as a web service. Given a news search query and a topic, the tool retrieves news articles matching the query and analyzes their stance towards the topic. The demo then visualizes the articles as a 2D scatter plot on a spectrum ranging from `against' to `in favour' weighted by the prominence of the news outlet and provides additional links and article excerpts as context.",
"The interface allows the user to obtain an overview of the range of opinion that is exhibited towards a topic of interest by various news outlets. The user can quickly collect evidence by skimming articles that fall on different parts of this opinion spectrum using the provided excerpts or peruse any of the original articles by following the available links."
],
[
"Until recently, stance detection had been mostly studied in debates BIBREF3 , BIBREF4 and student essays BIBREF5 . Lately, research in stance detection focused on Twitter BIBREF6 , BIBREF7 , BIBREF2 , particularly with regard to identifying rumors BIBREF8 , BIBREF9 , BIBREF10 . More recently, claims and headlines in news have been considered for stance detection BIBREF11 , which require recognizing entailment relations between claim and article."
],
[
"The objective of stance detection in our case is to classify the stance of an author's news article towards a given topic as `in favour', `against', or `neutral'. Our setting differs from previous instantiations of stance detection in two ways: a) We focus on excerpts from news articles, which are longer and may be more complex than tweets; and b) we do not aim to classify a news article with regard to its agreement with a claim or headline but with regard to its stance towards a topic."
],
[
"We collect data using the AYLIEN News API, which provides search capabilities for news articles enriched with extracted entities and other metadata. As most extracted entities have a neutral stance or might not be of interest to users, we take steps to compile a curated list of topics, which we detail in the following.",
"We define a topic to include named entities, but also more abstract, controversial keywords such as `gun control' and `abortion'. We compile a diverse list of topics that people are likely to be interested in from several sources: a) We retrieve the top 10 entities with the most mentions in each month from November 2015 to June 2017 and filter out entities that are not locations, persons, or organizations and those that are generally perceived as neutral; b) we manually curate a list of current important political figures; and c) we use DBpedia to retrieve a list of controversial topics. Specifically, we included all of the topics mentioned in the Wikipedia list of controversial issues and converted them to DBpedia resource URIs (e.g. http://en.wikipedia.org/wiki/Abortion INLINEFORM0 http://dbpedia.org/resource/Abortion) in order to facilitate linking between topics and DBpedia metadata. We then used DBpedia types BIBREF12 to filter out all entities of type Place, Person and Organisation. Finally, we ranked the remaining topics based on their number of unique outbound edges within the DBpedia graph as a measure of prominence, and picked the top 300. We show the final composition of topics in Table TABREF8 . For each topic, we retrieve the most relevant articles using the News API from November 2015 to July 2017.",
"For annotation, we need to trade-off the complexity and cost of annotation with the agreement between annotators. Annotating entire news articles places a large cognitive load on the annotator, which leads to fatigue and inaccurate annotations. For this reason, we choose to annotate excerpts from news articles. In internal studies, we found that providing a context window of 2-3 sentences around the mention of the entity together with the headline provides sufficient context to produce a reliable annotation. If the entity is not mentioned explicitly, we provide the first paragraph of the article and the headline as context. We annotate the collected data using CrowdFlower with 3 annotators per example using the interface in Figure FIGREF2 . We retain all examples where at least 2 annotators agree, which amounts to 70.5% of all examples.",
"The final dataset consists of 32,227 pairs of news articles and topics annotated with their stance. In particular, 47.67% examples have been annotated with `neutral', 21.9% with `against', 19.05% with `in favour', and 11.38% with `unrelated`. We use 70% of examples for training, 20% for validation, and 10% for testing according to a stratified split. As we expect to encounter novel and unknown entities in the wild, we ensure that entities do not overlap across splits and that we only test on unseen entities."
],
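The split described above — 70/20/10 with no topic (entity) overlap between splits — can be sketched in a few lines. This is a hypothetical illustration rather than the authors' code: the column name `topic` and the use of scikit-learn's `GroupShuffleSplit` are assumptions, and grouping by topic enforces entity disjointness while only approximating the stratification mentioned in the text.

```python
# Illustrative sketch: entity-disjoint train/validation/test split of the
# article-topic pairs, assuming a pandas DataFrame with a "topic" column.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

def entity_disjoint_split(df: pd.DataFrame, seed: int = 13):
    # 70% of examples for training; grouping by topic keeps every topic in one split only.
    outer = GroupShuffleSplit(n_splits=1, train_size=0.7, random_state=seed)
    train_idx, rest_idx = next(outer.split(df, groups=df["topic"]))
    train, rest = df.iloc[train_idx], df.iloc[rest_idx]

    # Split the remaining 30% into validation (20% overall) and test (10% overall).
    inner = GroupShuffleSplit(n_splits=1, train_size=2 / 3, random_state=seed)
    val_idx, test_idx = next(inner.split(rest, groups=rest["topic"]))
    return train, rest.iloc[val_idx], rest.iloc[test_idx]
```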
[
"We train a Bidirectional Encoding model BIBREF2 , which has achieved state-of-the-art results for Twitter stance detection on our dataset. The model encodes the entity using a bidirectional LSTM (BiLSTM), which is then used to initialize a BiLSTM that encodes the article and produces a prediction. To reduce the sequence length, we use the same context window that was presented to annotators for training the LSTM. We use pretrained GloVe embeddings BIBREF13 and tune hyperparameters on a validation set. The best model achieves a test accuracy of INLINEFORM0 and a macro-averaged test F1 score of INLINEFORM1 . It significantly outperforms baselines such as a bag-of-n-grams (accuracy: INLINEFORM2 ; F1: INLINEFORM3 )."
],
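As a rough illustration of the model paragraph above, the conditional-encoding idea — a BiLSTM over the target topic whose final states initialize a BiLSTM over the article excerpt — can be sketched in PyTorch. All layer sizes, names and the 3-way classifier head below are assumptions for illustration; the paper builds on the Bidirectional Encoding model of BIBREF2, not on this sketch.

```python
# Hypothetical PyTorch sketch of conditional encoding for stance detection:
# encode the target with a BiLSTM, use its final states to initialize a second
# BiLSTM over the article excerpt, then classify the stance.
import torch
import torch.nn as nn

class ConditionalBiLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=64, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)            # in practice initialized from GloVe
        self.target_rnn = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.article_rnn = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, num_classes)             # 'in favour' / 'against' / 'neutral'

    def forward(self, target_ids, article_ids):
        _, (h_t, c_t) = self.target_rnn(self.embed(target_ids))   # final target states
        _, (h_a, _) = self.article_rnn(self.embed(article_ids), (h_t, c_t))
        # Concatenate the forward and backward final hidden states of the article encoder.
        feats = torch.cat([h_a[0], h_a[1]], dim=-1)
        return self.out(feats)

# model = ConditionalBiLSTM(vocab_size=50_000)
# logits = model(torch.randint(0, 50_000, (8, 5)), torch.randint(0, 50_000, (8, 80)))
```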
[
"The interactive demo interface of 360° INLINEFORM0 INLINEFORM1 Stance Detection, which can be seen in Figure FIGREF9 , takes two inputs: a news search query, which is used to retrieve news articles using News API, and a stance target topic, which is used as the target of the stance detection model. For good results, the stance target should also be included as a keyword in the news search query. Multiple keywords can be provided as the query by connecting them with `AND' or `OR' as in Figure FIGREF9 .",
"When these two inputs are provided, the application retrieves a predefined number of news articles (up to 50) that match the first input, and analyzes their stance towards the target (the second input) using the stance detection model. The stance detection model is exposed as a web service and returns for each article-target entity pair a stance label (i.e. one of `in favour', `against' or `neutral') along with a probability.",
"The demo then visualizes the collected news articles as a 2D scatter plot with each (x,y) coordinate representing a single news article from a particular outlet that matched the user query. The x-axis shows the stance of the article in the range INLINEFORM0 . The y-axis displays the prominence of the news outlet that published the article in the range INLINEFORM1 , measured by its Alexa ranking. A table displays the provided information in a complementary format, listing the news outlets of the articles, the stance labels, confidence scores, and prominence rankings. Excerpts of the articles can be scanned by hovering over the news outlets in the table and the original articles can be read by clicking on the source.",
"360° INLINEFORM0 INLINEFORM1 Stance Detection is particularly useful to gain an overview of complex or controversial topics and to highlight differences in their perception across different outlets. We show visualizations for example queries and three controversial topics in Figure FIGREF14 . By extending the tool to enable retrieval of a larger number of news articles and more fine-grained filtering, we can employ it for general news analysis. For instance, we can highlight the volume and distribution of the stance of news articles from a single news outlet such as CNN towards a specified topic as in Figure FIGREF18 ."
],
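The scatter layout described above (stance on the x-axis, outlet prominence on the y-axis) is easy to reproduce offline for inspection. The matplotlib sketch below uses made-up example values and assumes a stance score already scaled to [-1, 1]; it only mimics the demo's visualization, not its actual web implementation.

```python
# Illustrative sketch of the demo's scatter layout: x = stance score in [-1, 1],
# y = prominence of the outlet, one point per retrieved article.
import matplotlib.pyplot as plt

articles = [  # (outlet, stance score, prominence) -- made-up illustrative values
    ("Outlet A", -0.8, 0.9),
    ("Outlet B", 0.1, 0.6),
    ("Outlet C", 0.7, 0.4),
]

fig, ax = plt.subplots()
for outlet, stance, prominence in articles:
    ax.scatter(stance, prominence)
    ax.annotate(outlet, (stance, prominence), textcoords="offset points", xytext=(5, 5))

ax.set_xlim(-1, 1)
ax.set_xlabel("stance (against ← → in favour)")
ax.set_ylabel("outlet prominence")
plt.show()
```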
[
"We have introduced 360° INLINEFORM0 INLINEFORM1 Stance Detection, a tool that aims to provide evidence and context in order to assist the user with forming a balanced opinion towards a controversial topic. It aggregates news with multiple perspectives on a topic, annotates them with their stance, and visualizes them on a spectrum ranging from support to opposition, allowing the user to skim excerpts of the articles or read the original source. We hope that this tool will demonstrate how NLP can be used to help combat filter bubbles and fake news and to aid users in obtaining evidence on which they can base their opinions."
],
[
"Sebastian Ruder is supported by the Irish Research Council Grant Number EBPPG/2014/30 and Science Foundation Ireland Grant Number SFI/12/RC/2289."
]
],
"section_name": [
"Introduction",
"Related work",
"Task definition",
"Data collection",
"Model",
"360°\\! \\! Stance Detection Demo",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"1db14fcf830574e7c657acee6d1c4449b83e68c8",
"e1e80a7bfb11ae14a52b4f8c92cb38341b13d282"
],
"answer": [
{
"evidence": [
"We train a Bidirectional Encoding model BIBREF2 , which has achieved state-of-the-art results for Twitter stance detection on our dataset. The model encodes the entity using a bidirectional LSTM (BiLSTM), which is then used to initialize a BiLSTM that encodes the article and produces a prediction. To reduce the sequence length, we use the same context window that was presented to annotators for training the LSTM. We use pretrained GloVe embeddings BIBREF13 and tune hyperparameters on a validation set. The best model achieves a test accuracy of INLINEFORM0 and a macro-averaged test F1 score of INLINEFORM1 . It significantly outperforms baselines such as a bag-of-n-grams (accuracy: INLINEFORM2 ; F1: INLINEFORM3 )."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The best model achieves a test accuracy of INLINEFORM0 and a macro-averaged test F1 score of INLINEFORM1 . It significantly outperforms baselines such as a bag-of-n-grams (accuracy: INLINEFORM2 ; F1: INLINEFORM3 )."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We train a Bidirectional Encoding model BIBREF2 , which has achieved state-of-the-art results for Twitter stance detection on our dataset. The model encodes the entity using a bidirectional LSTM (BiLSTM), which is then used to initialize a BiLSTM that encodes the article and produces a prediction. To reduce the sequence length, we use the same context window that was presented to annotators for training the LSTM. We use pretrained GloVe embeddings BIBREF13 and tune hyperparameters on a validation set. The best model achieves a test accuracy of INLINEFORM0 and a macro-averaged test F1 score of INLINEFORM1 . It significantly outperforms baselines such as a bag-of-n-grams (accuracy: INLINEFORM2 ; F1: INLINEFORM3 )."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The best model achieves a test accuracy of INLINEFORM0 and a macro-averaged test F1 score of INLINEFORM1 . It significantly outperforms baselines such as a bag-of-n-grams (accuracy: INLINEFORM2 ; F1: INLINEFORM3 )."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"2913c9027adae4fb2561a0671251550cc26250e5",
"5c2a576ee1524b52b4367c93a96f18c0bc4e2029"
],
"answer": [
{
"evidence": [
"We train a Bidirectional Encoding model BIBREF2 , which has achieved state-of-the-art results for Twitter stance detection on our dataset. The model encodes the entity using a bidirectional LSTM (BiLSTM), which is then used to initialize a BiLSTM that encodes the article and produces a prediction. To reduce the sequence length, we use the same context window that was presented to annotators for training the LSTM. We use pretrained GloVe embeddings BIBREF13 and tune hyperparameters on a validation set. The best model achieves a test accuracy of INLINEFORM0 and a macro-averaged test F1 score of INLINEFORM1 . It significantly outperforms baselines such as a bag-of-n-grams (accuracy: INLINEFORM2 ; F1: INLINEFORM3 )."
],
"extractive_spans": [
"bidirectional LSTM"
],
"free_form_answer": "",
"highlighted_evidence": [
"The model encodes the entity using a bidirectional LSTM (BiLSTM), which is then used to initialize a BiLSTM that encodes the article and produces a prediction."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We train a Bidirectional Encoding model BIBREF2 , which has achieved state-of-the-art results for Twitter stance detection on our dataset. The model encodes the entity using a bidirectional LSTM (BiLSTM), which is then used to initialize a BiLSTM that encodes the article and produces a prediction. To reduce the sequence length, we use the same context window that was presented to annotators for training the LSTM. We use pretrained GloVe embeddings BIBREF13 and tune hyperparameters on a validation set. The best model achieves a test accuracy of INLINEFORM0 and a macro-averaged test F1 score of INLINEFORM1 . It significantly outperforms baselines such as a bag-of-n-grams (accuracy: INLINEFORM2 ; F1: INLINEFORM3 )."
],
"extractive_spans": [
"a Bidirectional Encoding model BIBREF2"
],
"free_form_answer": "",
"highlighted_evidence": [
"We train a Bidirectional Encoding model BIBREF2 , which has achieved state-of-the-art results for Twitter stance detection on our dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"de5f3c82e16dfbd37d28ddecbe85052d66a35fc1",
"eaf241cd6c69903465b3192daf1929f97de1780a"
],
"answer": [
{
"evidence": [
"We collect data using the AYLIEN News API, which provides search capabilities for news articles enriched with extracted entities and other metadata. As most extracted entities have a neutral stance or might not be of interest to users, we take steps to compile a curated list of topics, which we detail in the following.",
"The final dataset consists of 32,227 pairs of news articles and topics annotated with their stance. In particular, 47.67% examples have been annotated with `neutral', 21.9% with `against', 19.05% with `in favour', and 11.38% with `unrelated`. We use 70% of examples for training, 20% for validation, and 10% for testing according to a stratified split. As we expect to encounter novel and unknown entities in the wild, we ensure that entities do not overlap across splits and that we only test on unseen entities."
],
"extractive_spans": [],
"free_form_answer": "They collect data using the AYLIEN News API, which provides search capabilities for news articles enriched with extracted entities and other metadata and take a step to compile a curated list of topics. The final dataset consists of 32,227 pairs of news articles and topics annotated with their stance. ",
"highlighted_evidence": [
"We collect data using the AYLIEN News API, which provides search capabilities for news articles enriched with extracted entities and other metadata. As most extracted entities have a neutral stance or might not be of interest to users, we take steps to compile a curated list of topics, which we detail in the following.",
"The final dataset consists of 32,227 pairs of news articles and topics annotated with their stance. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The final dataset consists of 32,227 pairs of news articles and topics annotated with their stance. In particular, 47.67% examples have been annotated with `neutral', 21.9% with `against', 19.05% with `in favour', and 11.38% with `unrelated`. We use 70% of examples for training, 20% for validation, and 10% for testing according to a stratified split. As we expect to encounter novel and unknown entities in the wild, we ensure that entities do not overlap across splits and that we only test on unseen entities."
],
"extractive_spans": [
"dataset consists of 32,227 pairs of news articles and topics annotated with their stance"
],
"free_form_answer": "",
"highlighted_evidence": [
"The final dataset consists of 32,227 pairs of news articles and topics annotated with their stance."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"do they compare their system with other systems?",
"what is the architecture of their model?",
"what dataset did they use for this tool?"
],
"question_id": [
"5da9e2eef741bd7efccec8e441b8e52e906b2d2d",
"77bc886478925c8e9fb369b1ba5d05c42b3cd79a",
"f15bc40960bd3f81bc791f43ab5c94c52378692d"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Figure 1: Interface provided to annotators. Annotation instructions are not shown.",
"Table 1: Types and numbers of retrieved topics.",
"Figure 2: 360° Stance Detection interface. News articles about a query, i.e. ‘Ireland AND brexit’ are visualized based on their stance towards a specified topic, i.e. ‘ireland’ and the prominence of the source. Additional information is provided in a table on the right, which allows to skim article excerpts or follow a link to the source.",
"Figure 3: 360° Stance Detection visualizations for example queries and topics.",
"Figure 4: Visualization distribution of stance towards Donald Trump and number of CNN news articles mentioning Donald Trump from August 2016 to January 2018."
],
"file": [
"2-Figure1-1.png",
"2-Table1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"4-Figure4-1.png"
]
} | [
"what dataset did they use for this tool?"
] | [
[
"1804.00982-Data collection-3",
"1804.00982-Data collection-0"
]
] | [
"They collect data using the AYLIEN News API, which provides search capabilities for news articles enriched with extracted entities and other metadata and take a step to compile a curated list of topics. The final dataset consists of 32,227 pairs of news articles and topics annotated with their stance. "
] | 344 |
1809.06083 | Similarity measure for Public Persons | For the web portal "Who is in the News!" with statistics about the appearance of persons in written news, we developed an extension which measures the relationship of public persons depending on a time parameter, as the relationship may vary over time. On a training corpus of English and German news articles we built a measure by extracting the persons' occurrences in the text via pretrained named entity extraction and then constructing time series of counts for each person. Pearson correlation over a sliding window is then used to measure the relation of two persons. | {
"paragraphs": [
[
"“Who is in the News!” is a webportal with statistics and plots about the appearence of persons in written news articles. It counts how often public persons are mentioned in news articles and can be used for research or journalistic purposes. The application is indexing articles published by “Reuters” agency on their website . With the interactive charts users can analyze different timespans for the mentiones of public people and look for patterns in the data. The portal is bulit with the Python microframework “Dash\" which uses the plattform “Plotly\" for the interactive charts.",
"Playing around with the charts shows some interresting patterns like the one in the example of Figure FIGREF5 . This figure suggests that there must be some relationship between this two persons. In this example it is obvious because the persons are both german politicians and candidates for the elections.",
"This motivated us to look for suitalbe measures to caputure how persons are related to each other, which then can be used to exted the webportal with charts showing the person to person relationships. Relationship and distance between persons have been analyzed for decades, for example BIBREF0 looked at distance in the famous experimental study “the Small World Problem”. They inspected the graph of relationships between different persons and set the “distance” to the shortest path between them.",
"Other approaches used large text corpora for trying to find connections and relatedness by making statistics over the words in the texts. This of course only works for people appearing in the texts and we will discuss this in section SECREF2 . All these methods do not cover the changes of relations of the persons over time, that may change over the years. Therefore the measure should have a time parameter, which can be set to the desired time we are investigating.",
"We have developed a method for such a measure and tested it on a set of news articles for the United States and Germany. In Figure FIGREF6 you see how the relation changes in an example of the German chancellor ”Angela Merkel” and her opponent on the last elections “Martin Schulz”. It starts around 0 in 2015 and goes up to about 0.75 in 2017 as we can expect looking at the high correlated time series chart in Figure FIGREF5 from the end of 2017."
],
[
"There are several methods which represent words as vectors of numbers and try to group the vectors of similar words together in vector space. Figure FIGREF8 shows a picture which represents such a high dimensional space in 2D via multidimensional scaling BIBREF1 . The implementation was done with Scikit Learn BIBREF2 , BIBREF3 , BIBREF4 . Word vectors are the building blocks for a lot of applications in areas like search, sentiment analysis and recommendation systems.",
"The similarity and therefore the distance between words is calculated via the cosine similarity of the associated vectors, which gives a number between -1 and 1. The word2vec tool was implemented by BIBREF5 , BIBREF6 , BIBREF7 and trained over a Google News dataset with about 100 billion words. They use global matrix factorization or local context window methods for the training of the vectors.",
"A trained dictionary for more than 3 million words and phrases with 300-dim vectors is provided for download. We used the Python library Gensim from BIBREF8 for the calculation of the word distances of the multidimensional scaling in Figure FIGREF8 .",
" BIBREF9 combine the global matrix factorization and local context window methods in the \"GloVe\" method for word representation .",
" BIBREF10 worked on a corpus of newspaper articles and developed a method for unsupervised relation discovery between named entities of different types by looking at the words between each pair of named etities. By measuring the similarity of this context words they can also discover the type of relatoionship. For example a person entity and an organization entity can have the relationship “is member of”. For our application this interesting method can not be used because we need additional time information.",
" BIBREF11 developed models for supervised learning with kernel methods and support vector machines for relation extraction and tested them on problems of person-affiliation and organization-location relations, but also without time parameter."
],
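For reference, the cosine similarity between word vectors discussed above can be computed directly with Gensim and the pretrained Google News vectors. The file name and the example names below are assumptions for illustration; out-of-vocabulary names will simply raise a KeyError.

```python
# Illustrative sketch: cosine similarity of two names in the pretrained
# Google News word2vec space, loaded with Gensim.
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin.gz", binary=True)
print(kv.similarity("Merkel", "Obama"))   # cosine similarity, a value between -1 and 1
```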
[
"We collected datasets of news articles in English and German language from the news agency Reuters (Table TABREF13 ). After a data cleaning step, which was deleting meta information like author and editor name from the article, title, body and date were stored in a local database and imported to a Pandas data frame BIBREF12 . The English corpus has a dictionary of length 106.848, the German version has a dictionary of length 163.788.",
"For each article we extracted with the Python library “Spacy” the named entities labeled as person. “Spacy” was used because of its good performance BIBREF13 and it has pre-trained language models for English, German and others. The entity recognition is not perfect, so we have errors in the lists of persons. In a post processing step the terms from a list of common errors are removed. The names of the persons appear in different versions like “Donald Trump” or “Trump”. We map all names to the shorter version i.e. “Trump” in this example.",
"In Figure FIGREF15 you can see the time series of the mentions of “Trump” in the news, with a peak at the 8th of November 2016 the day of the election. It is also visible that the general level is changing with the election and is on higher level since then.",
"Taking a look at the histograms of the most frequent persons in some timespan shows the top 20 persons in the English news articles from 2016 to 2018 (Figure FIGREF16 ). As expected the histogram has a distribution that follows Zipfs law BIBREF14 , BIBREF15 .",
"From the corpus data a dictionary is built, where for each person the number of mentions of this person in the news per day is recorded. This time series data can be used to build a model that covers time as parameter for the relationship to other persons."
],
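The extraction pipeline described in this section — running spaCy over each article, keeping person entities, mapping name variants to the short form, and counting mentions per day — could look roughly like the sketch below. The model name, the last-token surname heuristic and the DataFrame layout are illustrative assumptions, not the authors' exact code (and note that the person label is "PER" rather than "PERSON" in spaCy's German models).

```python
# Illustrative sketch: count person mentions per day with spaCy and pandas.
import pandas as pd
import spacy

nlp = spacy.load("en_core_web_sm")   # a pretrained English model; German models use the label "PER"

def person_mentions(articles: pd.DataFrame) -> pd.DataFrame:
    """articles: DataFrame with a datetime column 'date' and text columns 'title', 'body'."""
    rows = []
    for _, art in articles.iterrows():
        doc = nlp(f"{art['title']} {art['body']}")
        for ent in doc.ents:
            if ent.label_ == "PERSON":
                name = ent.text.split()[-1]   # map "Donald Trump" and "Trump" both to "Trump"
                rows.append({"date": art["date"], "person": name})
    counts = pd.DataFrame(rows)
    # One daily-count time series per person: rows are days, columns are persons.
    return (counts.groupby(["person", pd.Grouper(key="date", freq="D")])
                  .size().unstack(fill_value=0).T)
```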
[
"Figure FIGREF18 shows that the mentions of a person and the correlation with the mentions of another person varies over time. We want to capture this in our relation measure. So we take a time window of INLINEFORM0 days and look at the time series in the segment back in time as shown in the example of Figure FIGREF5 .",
"For this vectors of INLINEFORM0 numbers for persons we can use different similarity measures. This choice has of course an impact of the results in applications BIBREF16 . A first choice could be the cosine similarity as used in the word2vec implementations BIBREF5 . We propose a different calculation for our setup, because we want to capture the high correlation of the series even if they are on different absolute levels of the total number of mentions, as in the example of Figure FIGREF19 .",
"We propose to use the Pearson correlation coefficient instead. We can shift the window of calculation over time and therefore get the measure of relatedness as a function of time."
],
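The measure proposed above reduces to a rolling Pearson correlation of two daily-count series, which pandas provides directly. The 30-day window and the names come from the paper's running example; the function wrapper itself is an illustrative assumption.

```python
# Illustrative sketch: relatedness of two persons over time as the Pearson
# correlation of their daily mention counts within a sliding window.
import pandas as pd

def relatedness(counts: pd.DataFrame, a: str, b: str, window: int = 30) -> pd.Series:
    """counts: DataFrame indexed by day with one column of mention counts per person."""
    return counts[a].rolling(window).corr(counts[b])

# e.g. relatedness(counts, "Merkel", "Schulz", window=30).plot()
```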
[
"Figure FIGREF6 shows a chart of the Pearson correlation coefficient computed over a sliding window of 30 days from 2015-01-01 to 2018-02-26 for the persons “Merkel” and “Schulz”. The measure clearly covers the change in their relationship during this time period. We propose that 30 days is a good value for the time window, because on one hand it is large enough to have sufficient data for the calculation of the correlation, on the other hand it is sensitive enough to reflect changes over time. But the optimal value depends on the application for which the measure is used.",
"An example from the US news corpus shows the time series of “Trump” and “Obama” in Figure FIGREF18 and a zoom in to the first month of 2018 in Figure FIGREF19 . It shows that a high correlation can be on different absolute levels. Therefore we used Pearson correlation to calculate the relation of two persons. You can find examples of the similarities of some test persons from December 2017 in Table TABREF17 ",
"The time series of the correlations looks quite “noisy” as you can see in Figure FIGREF6 , because the series of the mentions has a high variance. To reflect the change of the relation of the persons in a more stable way, you can take a higher value for the size of the calculation window of the correlation between the two series. In the example of Figure FIGREF20 we used a calculation window of 120 days instead of 30 days."
],
[
"It would be interesting to test the ideas with a larger corpus of news articles for example the Google News articles used in the word2vec implementation BIBREF5 .",
"The method can be used for other named entities such as organizations or cities but we expect not as much variation over time periods as with persons. And similarities between different types of entities would we interesting. So as the relation of a person to a city may chance over time."
]
],
"section_name": [
"Motivation",
"Related work",
"Dataset and Data Collection",
"Building the Model",
"Results",
"Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"5c37c6170dbf487c1f3780fcc03c2d120772d3a0",
"f3a13929a3aec10647d9434f61fa97c9c720367a"
],
"answer": [
{
"evidence": [
"We collected datasets of news articles in English and German language from the news agency Reuters (Table TABREF13 ). After a data cleaning step, which was deleting meta information like author and editor name from the article, title, body and date were stored in a local database and imported to a Pandas data frame BIBREF12 . The English corpus has a dictionary of length 106.848, the German version has a dictionary of length 163.788."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We collected datasets of news articles in English and German language from the news agency Reuters (Table TABREF13 ). After a data cleaning step, which was deleting meta information like author and editor name from the article, title, body and date were stored in a local database and imported to a Pandas data frame BIBREF12 . The English corpus has a dictionary of length 106.848, the German version has a dictionary of length 163.788."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We collected datasets of news articles in English and German language from the news agency Reuters (Table TABREF13 ). After a data cleaning step, which was deleting meta information like author and editor name from the article, title, body and date were stored in a local database and imported to a Pandas data frame BIBREF12 . The English corpus has a dictionary of length 106.848, the German version has a dictionary of length 163.788."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We collected datasets of news articles in English and German language from the news agency Reuters (Table TABREF13 ). "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"2df2918d90c7c01c06ba5aa338c29356e7b1c245",
"4ce2953c9703ddab8912d78eefa9625a40045988"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"16d886d80bf4e3be14395338c753733cdd810dbf",
"b29afa720417d9f0130961b6ed8c312121cd8089"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: News articles"
],
"extractive_spans": [],
"free_form_answer": "70287",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: News articles"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We collected datasets of news articles in English and German language from the news agency Reuters (Table TABREF13 ). After a data cleaning step, which was deleting meta information like author and editor name from the article, title, body and date were stored in a local database and imported to a Pandas data frame BIBREF12 . The English corpus has a dictionary of length 106.848, the German version has a dictionary of length 163.788."
],
"extractive_spans": [
"English corpus has a dictionary of length 106.848",
"German version has a dictionary of length 163.788"
],
"free_form_answer": "",
"highlighted_evidence": [
"The English corpus has a dictionary of length 106.848, the German version has a dictionary of length 163.788."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Did they build a dataset?",
"Do they compare to other methods?",
"How large is the dataset?"
],
"question_id": [
"80d6b9123a10358f57f259b8996a792cac08cb88",
"5181aefb8a7272b4c83a1f7cb61f864ead6a1f1f",
"f010f9aa4ba1b4360a78c00aa0747d7730a61805"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Mentions of Merkel and Schulz in 1/2018",
"Table 1: News articles",
"Figure 2: Correlation for Merkel and Schulz",
"Figure 3: Distances with MDS",
"Figure 4: Mentions of Trump",
"Figure 5: Histogram of mentions in the news",
"Table 2: Similarities of Persons in Dec. 2017",
"Figure 6: Mentions of Trump and Obama",
"Figure 7: Mentions of Trump and Obama in 1/2018",
"Figure 8: Correlation for Trump and Obama"
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"2-Figure2-1.png",
"2-Figure3-1.png",
"3-Figure4-1.png",
"3-Figure5-1.png",
"4-Table2-1.png",
"4-Figure6-1.png",
"4-Figure7-1.png",
"4-Figure8-1.png"
]
} | [
"How large is the dataset?"
] | [
[
"1809.06083-Dataset and Data Collection-0",
"1809.06083-2-Table1-1.png"
]
] | [
"70287"
] | 345 |
1709.04005 | Addressee and Response Selection in Multi-Party Conversations with Speaker Interaction RNNs | In this paper, we study the problem of addressee and response selection in multi-party conversations. Understanding multi-party conversations is challenging because of complex speaker interactions: multiple speakers exchange messages with each other, playing different roles (sender, addressee, observer), and these roles vary across turns. To tackle this challenge, we propose the Speaker Interaction Recurrent Neural Network (SI-RNN). Whereas the previous state-of-the-art system updated speaker embeddings only for the sender, SI-RNN uses a novel dialog encoder to update speaker embeddings in a role-sensitive way. Additionally, unlike the previous work that selected the addressee and response separately, SI-RNN selects them jointly by viewing the task as a sequence prediction problem. Experimental results show that SI-RNN significantly improves the accuracy of addressee and response selection, particularly in complex conversations with many speakers and responses to distant messages many turns in the past. | {
"paragraphs": [
[
"Real-world conversations often involve more than two speakers. In the Ubuntu Internet Relay Chat channel (IRC), for example, one user can initiate a discussion about an Ubuntu-related technical issue, and many other users can work together to solve the problem. Dialogs can have complex speaker interactions: at each turn, users play one of three roles (sender, addressee, observer), and those roles vary across turns.",
"In this paper, we study the problem of addressee and response selection in multi-party conversations: given a responding speaker and a dialog context, the task is to select an addressee and a response from a set of candidates for the responding speaker. The task requires modeling multi-party conversations and can be directly used to build retrieval-based dialog systems BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 .",
"The previous state-of-the-art Dynamic-RNN model from BIBREF4 ouchi-tsuboi:2016:EMNLP2016 maintains speaker embeddings to track each speaker status, which dynamically changes across time steps. It then produces the context embedding from the speaker embeddings and selects the addressee and response based on embedding similarity. However, this model updates only the sender embedding, not the embeddings of the addressee or observers, with the corresponding utterance, and it selects the addressee and response separately. In this way, it only models who says what and fails to capture addressee information. Experimental results show that the separate selection process often produces inconsistent addressee-response pairs.",
"To solve these issues, we introduce the Speaker Interaction Recurrent Neural Network (SI-RNN). SI-RNN redesigns the dialog encoder by updating speaker embeddings in a role-sensitive way. Speaker embeddings are updated in different GRU-based units depending on their roles (sender, addressee, observer). Furthermore, we note that the addressee and response are mutually dependent and view the task as a joint prediction problem. Therefore, SI-RNN models the conditional probability (of addressee given the response and vice versa) and selects the addressee and response pair by maximizing the joint probability.",
"On a public standard benchmark data set, SI-RNN significantly improves the addressee and response selection accuracy, particularly in complex conversations with many speakers and responses to distant messages many turns in the past. Our code and data set are available online."
],
[
"We follow a data-driven approach to dialog systems. BIBREF5 singh1999reinforcement, BIBREF6 henderson2008hybrid, and BIBREF7 young2013pomdp optimize the dialog policy using Reinforcement Learning or the Partially Observable Markov Decision Process framework. In addition, BIBREF8 henderson2014second propose to use a predefined ontology as a logical representation for the information exchanged in the conversation. The dialog system can be divided into different modules, such as Natural Language Understanding BIBREF9 , BIBREF10 , Dialog State Tracking BIBREF11 , BIBREF12 , and Natural Language Generation BIBREF13 . Furthermore, BIBREF14 wen2016network and BIBREF15 bordes2017learning propose end-to-end trainable goal-oriented dialog systems.",
"Recently, short text conversation has been popular. The system receives a short dialog context and generates a response using statistical machine translation or sequence-to-sequence networks BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 . In contrast to response generation, the retrieval-based approach uses a ranking model to select the highest scoring response from candidates BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . However, these models are single-turn responding machines and thus still are limited to short contexts with only two speakers. As for larger context, BIBREF22 lowe2015ubuntu propose the Next Utterance Classification (NUC) task for multi-turn two-party dialogs. BIBREF4 ouchi-tsuboi:2016:EMNLP2016 extend NUC to multi-party conversations by integrating the addressee detection problem. Since the data is text based, they use only textual information to predict addressees as opposed to relying on acoustic signals or gaze information in multimodal dialog systems BIBREF23 , BIBREF24 .",
"Furthermore, several other papers are recently presented focusing on modeling role-specific information given the dialogue contexts BIBREF25 , BIBREF26 , BIBREF27 . For example, BIBREF25 meng2017towards combine content and temporal information to predict the utterance speaker. By contrast, our SIRNN explicitly utilizes the speaker interaction to maintain speaker embeddings and predicts the addressee and response by joint selection."
],
[
" BIBREF4 ouchi-tsuboi:2016:EMNLP2016 propose the addressee and response selection task for multi-party conversation. Given a responding speaker INLINEFORM0 and a dialog context INLINEFORM1 , the task is to select a response and an addressee. INLINEFORM2 is a list ordered by time step: INLINEFORM3 ",
"where INLINEFORM0 says INLINEFORM1 to INLINEFORM2 at time step INLINEFORM3 , and INLINEFORM4 is the total number of time steps before the response and addressee selection. The set of speakers appearing in INLINEFORM5 is denoted INLINEFORM6 . As for the output, the addressee is selected from INLINEFORM7 , and the response is selected from a set of candidates INLINEFORM8 . Here, INLINEFORM9 contains the ground-truth response and one or more false responses. We provide some examples in Table TABREF30 (Section SECREF6 )."
],
[
"In this section, we briefly review the state-of-the-art Dynamic-RNN model BIBREF4 , which our proposed model is based on. Dynamic-RNN solves the task in two phases: 1) the dialog encoder maintains a set of speaker embeddings to track each speaker status, which dynamically changes with time step INLINEFORM0 ; 2) then Dynamic-RNN produces the context embedding from the speaker embeddings and selects the addressee and response based on embedding similarity among context, speaker, and utterance.",
"Figure FIGREF4 (Left) illustrates the dialog encoder in Dynamic-RNN on an example context. In this example, INLINEFORM0 says INLINEFORM1 to INLINEFORM2 , then INLINEFORM3 says INLINEFORM4 to INLINEFORM5 , and finally INLINEFORM6 says INLINEFORM7 to INLINEFORM8 . The context INLINEFORM9 will be: DISPLAYFORM0 ",
"with the set of speakers INLINEFORM0 .",
"For a speaker INLINEFORM0 , the bold letter INLINEFORM1 denotes its embedding at time step INLINEFORM2 . Speaker embeddings are initialized as zero vectors and updated recurrently as hidden states of GRUs BIBREF28 , BIBREF29 . Specifically, for each time step INLINEFORM3 with the sender INLINEFORM4 and the utterance INLINEFORM5 , the sender embedding INLINEFORM6 is updated recurrently from the utterance: INLINEFORM7 ",
"where INLINEFORM0 is the embedding for utterance INLINEFORM1 . Other speaker embeddings are updated from INLINEFORM2 . The speaker embeddings are updated until time step INLINEFORM3 .",
"To summarize the whole dialog context INLINEFORM0 , the model applies element-wise max pooling over all the speaker embeddings to get the context embedding INLINEFORM1 : DISPLAYFORM0 ",
"The probability of an addressee and a response being the ground truth is calculated based on embedding similarity. To be specific, for addressee selection, the model compares the candidate speaker INLINEFORM0 , the dialog context INLINEFORM1 , and the responding speaker INLINEFORM2 : DISPLAYFORM0 ",
"where INLINEFORM0 is the final speaker embedding for the responding speaker INLINEFORM1 , INLINEFORM2 is the final speaker embedding for the candidate addressee INLINEFORM3 , INLINEFORM4 is the logistic sigmoid function, INLINEFORM5 is the row-wise concatenation operator, and INLINEFORM6 is a learnable parameter. Similarly, for response selection, DISPLAYFORM0 ",
"where INLINEFORM0 is the embedding for the candidate response INLINEFORM1 , and INLINEFORM2 is a learnable parameter.",
"The model is trained end-to-end to minimize a joint cross-entropy loss for the addressee selection and the response selection with equal weights. At test time, the addressee and the response are separately selected to maximize the probability in Eq EQREF12 and Eq EQREF13 ."
],
[
"While Dynamic-RNN can track the speaker status by capturing who says what in multi-party conversation, there are still some issues. First, at each time step, only the sender embedding is updated from the utterance. Therefore, other speakers are blind to what is being said, and the model fails to capture addressee information. Second, while the addressee and response are mutually dependent, Dynamic-RNN selects them independently. Consider a case where the responding speaker is talking to two other speakers in separate conversation threads. The choice of addressee is likely to be either of the two speakers, but the choice is much less ambiguous if the correct response is given, and vice versa. Dynamic-RNN often produces inconsistent addressee-response pairs due to the separate selection. See Table TABREF30 for examples.",
"In contrast to Dynamic-RNN, the dialog encoder in SI-RNN updates embeddings for all the speakers besides the sender at each time step. Speaker embeddings are updated depending on their roles: the update of the sender is different from the addressee, which is different from the observers. Furthermore, the update of a speaker embedding is not only from the utterance, but also from other speakers. These are achieved by designing variations of GRUs for different roles. Finally, SI-RNN selects the addressee and response jointly by maximizing the joint probability.",
"[t] Dialog Encoder in SI-RNN [1] Input INLINEFORM0 : INLINEFORM1 INLINEFORM2 where INLINEFORM3 // Initialize speaker embeddings INLINEFORM4 INLINEFORM5 //Update speaker embeddings INLINEFORM6 // Update sender, addressee, observers INLINEFORM7 INLINEFORM8 INLINEFORM9 // Compute utterance embedding INLINEFORM10 INLINEFORM11 // Update sender embedding INLINEFORM12 // Update addressee embedding INLINEFORM13 // Update observer embeddings INLINEFORM14 INLINEFORM15 // Return final speaker embeddings Output INLINEFORM16 for INLINEFORM17 "
],
[
"To encode an utterance INLINEFORM0 of INLINEFORM1 words, we use a RNN with Gated Recurrent Units BIBREF28 , BIBREF29 : INLINEFORM2 ",
"where INLINEFORM0 is the word embedding for INLINEFORM1 , and INLINEFORM2 is the INLINEFORM3 hidden state. INLINEFORM4 is initialized as a zero vector, and the utterance embedding is the last hidden state, i.e. INLINEFORM5 ."
],
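A minimal PyTorch sketch of the utterance encoder described above — a word-level GRU whose last hidden state serves as the utterance embedding. The 300-dimensional word embeddings and 50-dimensional hidden size follow the implementation details reported later; the class and variable names are assumptions.

```python
# Illustrative sketch of the utterance encoder: run a GRU over the word
# embeddings of an utterance and use the last hidden state as its embedding.
import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # e.g. initialized from GloVe
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)

    def forward(self, word_ids):                         # word_ids: (batch, num_words)
        _, h_last = self.gru(self.embed(word_ids))       # h_last: (1, batch, hidden)
        return h_last.squeeze(0)                         # the utterance embedding
```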
[
"Figure FIGREF4 (Right) shows how SI-RNN encodes the example in Eq EQREF9 . Unlike Dynamic-RNN, SI-RNN updates all speaker embeddings in a role-sensitive manner. For example, at the first time step when INLINEFORM0 says INLINEFORM1 to INLINEFORM2 , Dynamic-RNN only updates INLINEFORM3 using INLINEFORM4 , while other speakers are updated using INLINEFORM5 . In contrast, SI-RNN updates each speaker status with different units: INLINEFORM6 updates the sender embedding INLINEFORM7 from the utterance embedding INLINEFORM8 and the addressee embedding INLINEFORM9 ; INLINEFORM10 updates the addressee embedding INLINEFORM11 from INLINEFORM12 and INLINEFORM13 ; INLINEFORM14 updates the observer embedding INLINEFORM15 from INLINEFORM16 .",
"Algorithm SECREF4 gives a formal definition of the dialog encoder in SI-RNN. The dialog encoder is a function that takes as input a dialog context INLINEFORM0 (lines 1-5) and returns speaker embeddings at the final time step (lines 28-30). Speaker embeddings are initialized as INLINEFORM1 -dimensional zero vectors (lines 6-9). Speaker embeddings are updated by iterating over each line in the context (lines 10-27)."
],
[
"In this subsection, we explain in detail how INLINEFORM0 / INLINEFORM1 / INLINEFORM2 update speaker embeddings according to their roles at each time step (Algorithm SECREF4 lines 19-26).",
"As shown in Figure FIGREF17 , INLINEFORM0 / INLINEFORM1 / INLINEFORM2 are all GRU-based units. INLINEFORM3 updates the sender embedding from the previous sender embedding INLINEFORM4 , the previous addressee embedding INLINEFORM5 , and the utterance embedding INLINEFORM6 : INLINEFORM7 ",
"The update, as illustrated in the upper part of Figure FIGREF17 , is controlled by three gates. The INLINEFORM0 gate controls the previous sender embedding INLINEFORM1 , and INLINEFORM2 controls the previous addressee embedding INLINEFORM3 . Those two gated interactions together produce the sender embedding proposal INLINEFORM4 . Finally, the update gate INLINEFORM5 combines the proposal INLINEFORM6 and the previous sender embedding INLINEFORM7 to update the sender embedding INLINEFORM8 . The computations in INLINEFORM9 (including gates INLINEFORM10 , INLINEFORM11 , INLINEFORM12 , the proposal embedding INLINEFORM13 , and the final updated embedding INLINEFORM14 ) are formulated as: INLINEFORM15 ",
" where INLINEFORM0 INLINEFORM1 are learnable parameters. INLINEFORM2 uses the same formulation with a different set of parameters, as illustrated in the middle of Figure FIGREF17 . In addition, we update the observer embeddings from the utterance. INLINEFORM3 is implemented as the traditional GRU unit in the lower part of Figure FIGREF17 . Note that the parameters in INLINEFORM4 / INLINEFORM5 / INLINEFORM6 are not shared. This allows SI-RNN to learn role-dependent features to control speaker embedding updates. The formulations of INLINEFORM7 and INLINEFORM8 are similar."
],
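Because the gating equations above are elided in this text dump (the INLINEFORM placeholders), the following PyTorch cell is only one plausible reading of the prose: separate gates on the previous sender and addressee embeddings produce a proposal, and an update gate mixes the proposal with the previous sender embedding. GRU^{adr} and GRU^{obs} would be analogous cells with their own parameters. Treat this as a hedged sketch, not the paper's exact formulation.

```python
# Illustrative sketch of a role-sensitive update unit (the sender case).
import torch
import torch.nn as nn

class SenderUpdateCell(nn.Module):
    def __init__(self, dim_utt, dim_spk):
        super().__init__()
        self.gate_s = nn.Linear(dim_utt + dim_spk, dim_spk)      # gate on previous sender embedding
        self.gate_a = nn.Linear(dim_utt + dim_spk, dim_spk)      # gate on previous addressee embedding
        self.gate_z = nn.Linear(dim_utt + 2 * dim_spk, dim_spk)  # update gate
        self.proposal = nn.Linear(dim_utt + 2 * dim_spk, dim_spk)

    def forward(self, utt, sender_prev, addressee_prev):
        r_s = torch.sigmoid(self.gate_s(torch.cat([utt, sender_prev], dim=-1)))
        r_a = torch.sigmoid(self.gate_a(torch.cat([utt, addressee_prev], dim=-1)))
        p = torch.tanh(self.proposal(torch.cat([utt, r_s * sender_prev, r_a * addressee_prev], dim=-1)))
        z = torch.sigmoid(self.gate_z(torch.cat([utt, sender_prev, addressee_prev], dim=-1)))
        return z * sender_prev + (1.0 - z) * p                   # updated sender embedding
```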
[
"The dialog encoder takes the dialog context INLINEFORM0 as input and returns speaker embeddings at the final time step, INLINEFORM1 . Recall from Section SECREF7 that Dynamic-RNN produces the context embedding INLINEFORM2 using Eq EQREF11 and then selects the addressee and response separately using Eq EQREF12 and Eq EQREF13 .",
"In contrast, SI-RNN performs addressee and response selection jointly: the response is dependent on the addressee and vice versa. Therefore, we view the task as a sequence prediction process: given the context and responding speaker, we first predict the addressee, and then predict the response given the addressee. (We also use the reversed prediction order as in Eq EQREF21 .)",
"In addition to Eq EQREF12 and Eq EQREF13 , SI-RNN is also trained to model the conditional probability as follows. To predict the addressee, we calculate the probability of the candidate speaker INLINEFORM0 to be the ground-truth given the ground-truth response INLINEFORM1 (available during training time): DISPLAYFORM0 ",
"The key difference from Eq EQREF12 is that Eq EQREF19 is conditioned on the correct response INLINEFORM0 with embedding INLINEFORM1 . Similarly, for response selection, we calculate the probability of a candidate response INLINEFORM2 given the ground-truth addressee INLINEFORM3 : DISPLAYFORM0 ",
"At test time, SI-RNN selects the addressee-response pair from INLINEFORM0 to maximize the joint probability INLINEFORM1 : DISPLAYFORM0 ",
" In Eq EQREF21 , we decompose the joint probability into two terms: the first term selects the response given the context, and then selects the addressee given the context and the selected response; the second term selects the addressee and response in the reversed order."
],
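The joint selection in Eq EQREF21 amounts to scoring every addressee-response pair with the two decompositions of the joint probability and taking the argmax. The sketch below assumes the four scoring functions (Eq EQREF12, EQREF13, EQREF19, EQREF20) are provided by the trained model; whether the two orderings are summed or combined differently is not recoverable from this dump, so the sum here is an assumption.

```python
# Illustrative sketch of joint addressee-response selection: score each candidate
# pair under both prediction orders and return the highest-scoring pair.
def joint_select(addressees, responses, p_adr, p_res, p_adr_given_res, p_res_given_adr):
    best_pair, best_score = None, float("-inf")
    for a in addressees:
        for r in responses:
            score = p_res(r) * p_adr_given_res(a, r) + p_adr(a) * p_res_given_adr(r, a)
            if score > best_score:
                best_pair, best_score = (a, r), score
    return best_pair
```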
[
"Data Set. We use the Ubuntu Multiparty Conversation Corpus BIBREF4 and summarize the data statistics in Table TABREF24 . same The whole data set (including the Train/Dev/Test split and the false response candidates) is publicly available. The data set is built from the Ubuntu IRC chat room where a number of users discuss Ubuntu-related technical issues. The log is organized as one file per day corresponding to a document INLINEFORM0 . Each document consists of (Time, SenderID, Utterance) lines. If users explicitly mention addressees at the beginning of the utterance, the addresseeID is extracted. Then a sample, namely a unit of input (the dialog context and the current sender) and output (the addressee and response prediction) for the task, is created to predict the ground-truth addressee and response of this line. Note that samples are created only when the addressee is explicitly mentioned for clear, unambiguous ground-truth labels. False response candidates are randomly chosen from all other utterances within the same document. Therefore, distractors are likely from the same sub-conversation or even from the same sender but at different time steps. This makes it harder than BIBREF22 lowe2015ubuntu where distractors are randomly chosen from all documents. If no addressee is explicitly mentioned, the addressee is left blank and the line is marked as a part of the context.",
"Baselines. Apart from Dynamic-RNN, we also include several other baselines. Recent+TF-IDF always selects the most recent speaker (except the responding speaker INLINEFORM0 ) as the addressee and chooses the response to maximize the tf-idf cosine similarity with the context. We improve it by using a slightly different addressee selection heuristic (Direct-Recent+TF-IDF): select the most recent speaker that directly talks to INLINEFORM1 by an explicit addressee mention. We select from the previous 15 utterances, which is the longest context among all the experiments. This works much better when there are multiple concurrent sub-conversations, and INLINEFORM2 responds to a distant message in the context. We also include another GRU-based model Static-RNN from BIBREF4 ouchi-tsuboi:2016:EMNLP2016. Unlike Dynamic-RNN, speaker embeddings in Static-RNN are based on the order of speakers and are fixed. Furthermore, inspired by BIBREF30 zhou16multi and BIBREF19 serban2016building, we implement Static-Hier-RNN, a hierarchical version of Static-RNN. It first builds utterance embeddings from words and then uses high-level RNNs to process utterance embeddings.",
"Implementation Details For a fair comparison, we follow the hyperparameters from BIBREF4 ouchi-tsuboi:2016:EMNLP2016, which are chosen based on the validation data set. We take a maximum of 20 words for each utterance. We use 300-dimensional GloVe word vectors, which are fixed during training. SI-RNN uses 50-dimensional vectors for both speaker embeddings and hidden states. Model parameters are initialized with a uniform distribution between -0.01 and 0.01. We set the mini-batch size to 128. The joint cross-entropy loss function with 0.001 L2 weight decay is minimized by Adam BIBREF31 . The training is stopped early if the validation accuracy is not improved for 5 consecutive epochs. All experiments are performed on a single GTX Titan X GPU. The maximum number of epochs is 30, and most models converge within 10 epochs."
],
[
"For fair and meaningful quantitative comparisons, we follow BIBREF4 ouchi-tsuboi:2016:EMNLP2016's evaluation protocols. SI-RNN improves the overall accuracy on the addressee and response selection task. Two ablation experiments further analyze the contribution of role-sensitive units and joint selection respectively. We then confirm the robustness of SI-RNN with the number of speakers and distant responses. Finally, in a case study we discuss how SI-RNN handles complex conversations by either engaging in a new sub-conversation or responding to a distant message.",
"Overall Result. As shown in Table TABREF23 , SI-RNN significantly improves upon the previous state-of-the-art. In particular, addressee selection (ADR) benefits most, with different number of candidate responses (denoted as RES-CAND): around 12% in RES-CAND INLINEFORM0 and more than 10% in RES-CAND INLINEFORM1 . Response selection (RES) is also improved, suggesting role-sensitive GRUs and joint selection are helpful for response selection as well. The improvement is more obvious with more candidate responses (2% in RES-CAND INLINEFORM2 and 4% in RES-CAND INLINEFORM3 ). These together result in significantly better accuracy on the ADR-RES metric as well.",
"Ablation Study. We show an ablation study in the last rows of Table TABREF23 . First, we share the parameters of INLINEFORM0 / INLINEFORM1 / INLINEFORM2 . The accuracy decreases significantly, indicating that it is crucial to learn role-sensitive units to update speaker embeddings. Second, to examine our joint selection, we fall back to selecting the addressee and response separately, as in Dynamic-RNN. We find that joint selection improves ADR and RES individually, and it is particularly helpful for pair selection ADR-RES.",
"Number of Speakers. Numerous speakers create complex dialogs and increased candidate addressee, thus the task becomes more challenging. In Figure FIGREF27 (Upper), we investigate how ADR accuracy changes with the number of speakers in the context of length 15, corresponding to the rows with T=15 in Table TABREF23 . Recent+TF-IDF always chooses the most recent speaker and the accuracy drops dramatically as the number of speakers increases. Direct-Recent+TF-IDF shows better performance, and Dynamic-RNNis marginally better. SI-RNN is much more robust and remains above 70% accuracy across all bins. The advantage is more obvious for bins with more speakers.",
"Addressing Distance. Addressing distance is the time difference from the responding speaker to the ground-truth addressee. As the histogram in Figure FIGREF27 (Lower) shows, while the majority of responses target the most recent speaker, many responses go back five or more time steps. It is important to note that for those distant responses, Dynamic-RNN sees a clear performance decrease, even worse than Direct-Recent+TF-IDF. In contrast, SI-RNN handles distant responses much more accurately.",
"same",
"Case Study. Examples in Table TABREF30 show how SI-RNN can handle complex multi-party conversations by selecting from 10 candidate responses. In both examples, the responding speakers participate in two or more concurrent sub-conversations with other speakers.",
"Example (a) demonstrates the ability of SI-RNN to engage in a new sub-conversation. The responding speaker “wafflejock\" is originally involved in two sub-conversations: the sub-conversation 1 with “codepython\", and the ubuntu installation issue with “theoletom\". While it is reasonable to address “codepython\" and “theoletom\", the responses from other baselines are not helpful to solve corresponding issues. TF-IDF prefers the response with the “install\" key-word, yet the response is repetitive and not helpful. Dynamic-RNN selects an irrelevant response to “codepython\". SI-RNN chooses to engage in a new sub-conversation by suggesting a solution to “releaf\" about Ubuntu dedicated laptops.",
"Example (b) shows the advantage of SI-RNN in responding to a distant message. The responding speaker “nicomachus\" is actively engaged with “VeryBewitching\" in the sub-conversation 1 and is also loosely involved in the sub-conversation 2: “chingao\" mentions “nicomachus\" in the most recent utterance. SI-RNN remembers the distant sub-conversation 1 and responds to “VeryBewitching\" with a detailed answer. Direct-Recent+TF-IDF selects the ground-truth addressee because “VeryBewitching\" talks to “nicomachus\", but the response is not helpful. Dynamic-RNN is biased to the recent speaker “chingao\", yet the response is not relevant."
],
[
"SI-RNN jointly models who says what to whom by updating speaker embeddings in a role-sensitive way. It provides state-of-the-art addressee and response selection, which can instantly help retrieval-based dialog systems. In the future, we also consider using SI-RNN to extract sub-conversations in the unlabeled conversation corpus and provide a large-scale disentangled multi-party conversation data set."
],
[
"We thank the members of the UMichigan-IBM Sapphire Project and all the reviewers for their helpful feedback. This material is based in part upon work supported by IBM under contract 4915012629. Any opinions, findings, conclusions or recommendations expressed above are those of the authors and do not necessarily reflect the views of IBM."
]
],
"section_name": [
"Introduction",
"Related Work",
"Addressee and Response Selection",
"Dynamic-RNN Model",
"Speaker Interaction RNN",
"Utterance Encoder",
"Dialog Encoder",
"Role-Sensitive Update",
"Joint Selection",
"Experimental Setup",
"Results and Discussion",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"2196d6cec1c48411e1140975f684ca8f983934d2",
"ce68608af6920eddd2c7ba8d15d400e9d0573dce"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Data Statistics. “AdrMention Freq” is the frequency of explicit addressee mention."
],
"extractive_spans": [],
"free_form_answer": "26.8",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Data Statistics. “AdrMention Freq” is the frequency of explicit addressee mention."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Data Set. We use the Ubuntu Multiparty Conversation Corpus BIBREF4 and summarize the data statistics in Table TABREF24 . same The whole data set (including the Train/Dev/Test split and the false response candidates) is publicly available. The data set is built from the Ubuntu IRC chat room where a number of users discuss Ubuntu-related technical issues. The log is organized as one file per day corresponding to a document INLINEFORM0 . Each document consists of (Time, SenderID, Utterance) lines. If users explicitly mention addressees at the beginning of the utterance, the addresseeID is extracted. Then a sample, namely a unit of input (the dialog context and the current sender) and output (the addressee and response prediction) for the task, is created to predict the ground-truth addressee and response of this line. Note that samples are created only when the addressee is explicitly mentioned for clear, unambiguous ground-truth labels. False response candidates are randomly chosen from all other utterances within the same document. Therefore, distractors are likely from the same sub-conversation or even from the same sender but at different time steps. This makes it harder than BIBREF22 lowe2015ubuntu where distractors are randomly chosen from all documents. If no addressee is explicitly mentioned, the addressee is left blank and the line is marked as a part of the context.",
"FLOAT SELECTED: Table 3: Data Statistics. “AdrMention Freq” is the frequency of explicit addressee mention."
],
"extractive_spans": [],
"free_form_answer": "26.8",
"highlighted_evidence": [
"We use the Ubuntu Multiparty Conversation Corpus BIBREF4 and summarize the data statistics in Table TABREF24 .",
"FLOAT SELECTED: Table 3: Data Statistics. “AdrMention Freq” is the frequency of explicit addressee mention."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"5177337ae44dd46fd396dbb73e88b310b937449c",
"f19e2e2a05f159b6d4fdbb079fcf4d4066dfb5f7"
],
"answer": [
{
"evidence": [
"Overall Result. As shown in Table TABREF23 , SI-RNN significantly improves upon the previous state-of-the-art. In particular, addressee selection (ADR) benefits most, with different number of candidate responses (denoted as RES-CAND): around 12% in RES-CAND INLINEFORM0 and more than 10% in RES-CAND INLINEFORM1 . Response selection (RES) is also improved, suggesting role-sensitive GRUs and joint selection are helpful for response selection as well. The improvement is more obvious with more candidate responses (2% in RES-CAND INLINEFORM2 and 4% in RES-CAND INLINEFORM3 ). These together result in significantly better accuracy on the ADR-RES metric as well.",
"FLOAT SELECTED: Table 2: Addressee and response selection results on the Ubuntu Multiparty Conversation Corpus. Metrics include accuracy of addressee selection (ADR), response selection (RES), and pair selection (ADR-RES). RES-CAND: the number of candidate responses. T : the context length."
],
"extractive_spans": [],
"free_form_answer": "In addressee selection around 12% in RES-CAND = 2 and 10% in RES-CAND = 10, in candidate responses around 2% in RES-CAND = 2 and 4% in RES-CAND = 10",
"highlighted_evidence": [
"As shown in Table TABREF23 , SI-RNN significantly improves upon the previous state-of-the-art. In particular, addressee selection (ADR) benefits most, with different number of candidate responses (denoted as RES-CAND): around 12% in RES-CAND INLINEFORM0 and more than 10% in RES-CAND INLINEFORM1 . Response selection (RES) is also improved, suggesting role-sensitive GRUs and joint selection are helpful for response selection as well. The improvement is more obvious with more candidate responses (2% in RES-CAND INLINEFORM2 and 4% in RES-CAND INLINEFORM3 ). ",
"FLOAT SELECTED: Table 2: Addressee and response selection results on the Ubuntu Multiparty Conversation Corpus. Metrics include accuracy of addressee selection (ADR), response selection (RES), and pair selection (ADR-RES). RES-CAND: the number of candidate responses. T : the context length."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Addressee and response selection results on the Ubuntu Multiparty Conversation Corpus. Metrics include accuracy of addressee selection (ADR), response selection (RES), and pair selection (ADR-RES). RES-CAND: the number of candidate responses. T : the context length."
],
"extractive_spans": [],
"free_form_answer": "The accuracy of addressee selection is improved by 11.025 percent points on average, the accuracy of response selection is improved by 3.09 percent points on average.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Addressee and response selection results on the Ubuntu Multiparty Conversation Corpus. Metrics include accuracy of addressee selection (ADR), response selection (RES), and pair selection (ADR-RES). RES-CAND: the number of candidate responses. T : the context length."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"16decc3c6a4a9c5753617d2abdbc45fcbb7d40b8",
"a34c85fce39f613f18dcf2ee9325c46f65c4f31b"
],
"answer": [
{
"evidence": [
"The previous state-of-the-art Dynamic-RNN model from BIBREF4 ouchi-tsuboi:2016:EMNLP2016 maintains speaker embeddings to track each speaker status, which dynamically changes across time steps. It then produces the context embedding from the speaker embeddings and selects the addressee and response based on embedding similarity. However, this model updates only the sender embedding, not the embeddings of the addressee or observers, with the corresponding utterance, and it selects the addressee and response separately. In this way, it only models who says what and fails to capture addressee information. Experimental results show that the separate selection process often produces inconsistent addressee-response pairs."
],
"extractive_spans": [
"Dynamic-RNN model from BIBREF4"
],
"free_form_answer": "",
"highlighted_evidence": [
"The previous state-of-the-art Dynamic-RNN model from BIBREF4 ouchi-tsuboi:2016:EMNLP2016 maintains speaker embeddings to track each speaker status, which dynamically changes across time steps. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The previous state-of-the-art Dynamic-RNN model from BIBREF4 ouchi-tsuboi:2016:EMNLP2016 maintains speaker embeddings to track each speaker status, which dynamically changes across time steps. It then produces the context embedding from the speaker embeddings and selects the addressee and response based on embedding similarity. However, this model updates only the sender embedding, not the embeddings of the addressee or observers, with the corresponding utterance, and it selects the addressee and response separately. In this way, it only models who says what and fails to capture addressee information. Experimental results show that the separate selection process often produces inconsistent addressee-response pairs."
],
"extractive_spans": [
"Dynamic-RNN model"
],
"free_form_answer": "",
"highlighted_evidence": [
"The previous state-of-the-art Dynamic-RNN model from BIBREF4 ouchi-tsuboi:2016:EMNLP2016 maintains speaker embeddings to track each speaker status, which dynamically changes across time steps. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"what is the average number of speakers in the dataset?",
"by how much is accuracy improved?",
"what are the previous state of the art systems?"
],
"question_id": [
"1e582319df1739dcd07ba0ba39e8f70187fba049",
"aaf2445e78348dba66d7208b7430d25364e11e46",
"d98148f65d893101fa9e18aaf549058712485436"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1: Notations for the task and model.",
"Figure 1: Dialog encoders in DYNAMIC-RNN (Left) and SI-RNN (Right) for an example context at the top. Speaker embeddings are initialized as zero vectors and updated recurrently as hidden states along the time step. In SI-RNN, the same speaker embedding is updated in different units depending on the role (IGRUS for sender, IGRUA for addressee,GRUO for observer).",
"Figure 2: Illustration of IGRUS (upper, blue), IGRUA (middle, green), and GRUO (lower, yellow). Filled circles are speaker embeddings, which are recurrently updated. Unfilled circles are gates. Filled squares are speaker embedding proposals.",
"Table 2: Addressee and response selection results on the Ubuntu Multiparty Conversation Corpus. Metrics include accuracy of addressee selection (ADR), response selection (RES), and pair selection (ADR-RES). RES-CAND: the number of candidate responses. T : the context length.",
"Table 3: Data Statistics. “AdrMention Freq” is the frequency of explicit addressee mention.",
"Figure 3: Effect of the number of speakers in the context (Upper) and the addressee distance (Lower). Left axis: the histogram shows the number of test examples. Right axis: the curves show ADR accuracy on the test set.",
"Table 4: Case Study. denotes the ground-truth. Sub-conversations are coded with different numbers for the purpose of analysis (sub-conversation labels are not available during training or testing)."
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"4-Figure2-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"6-Figure3-1.png",
"7-Table4-1.png"
]
} | [
"what is the average number of speakers in the dataset?",
"by how much is accuracy improved?"
] | [
[
"1709.04005-6-Table3-1.png",
"1709.04005-Experimental Setup-0"
],
[
"1709.04005-6-Table2-1.png",
"1709.04005-Results and Discussion-1"
]
] | [
"26.8",
"The accuracy of addressee selection is improved by 11.025 percent points on average, the accuracy of response selection is improved by 3.09 percent points on average."
] | 346 |
1909.02855 | Don't Forget the Long Tail! A Comprehensive Analysis of Morphological Generalization in Bilingual Lexicon Induction | Human translators routinely have to translate rare inflections of words - due to the Zipfian distribution of words in a language. When translating from Spanish, a good translator would have no problem identifying the proper translation of a statistically rare inflection such as hablaramos. Note the lexeme itself, hablar, is relatively common. In this work, we investigate whether state-of-the-art bilingual lexicon inducers are capable of learning this kind of generalization. We introduce 40 morphologically complete dictionaries in 10 languages and evaluate three of the state-of-the-art models on the task of translation of less frequent morphological forms. We demonstrate that the performance of state-of-the-art models drops considerably when evaluated on infrequent morphological inflections and then show that adding a simple morphological constraint at training time improves the performance, proving that the bilingual lexicon inducers can benefit from better encoding of morphology. | {
"paragraphs": [
[
"Human translators exhibit remarkable generalization capabilities and are able to translate even rare inflections they may have never seen before. Indeed, this skill is necessary for translation since language follows a Zipfian distribution BIBREF0: a large number of the tokens in a translated text will come from rare types, including rare inflections of common lexemes. For instance, a Spanish translator will most certainly know the verb hablar “to speak”, but they will only have seen the less frequent, first-person plural future form hablarámos a few times. Nevertheless, they would have no problem translating the latter. In this paper we ask whether current methods for bilingual lexicon induction (BLI) generalize morphologically as humans do. Generalization to rare and novel words is arguably the main point of BLI as a task—most frequent translation pairs are already contained in digital dictionaries. Modern word embeddings encode character-level knowledge BIBREF1, which should—in principle—enable the models to learn this behaviour; but morphological generalization has never been directly tested.",
"Most existing dictionaries used for BLI evaluation do not account for the full spectrum of linguistic properties of language. Specifically, as we demonstrate in sec:dictionaries, they omit most morphological inflections of even common lexemes. To enable a more thorough evaluation we introduce a new resource: 40 morphologically complete dictionaries for 5 Slavic and 5 Romance languages, which contain the inflectional paradigm of every word they hold. Much like with a human translator, we expect a BLI model to competently translate full paradigms of lexical items. Throughout this work we place our focus on genetically-related language pairs. This not only allows us to cleanly map one morphological inflection onto another, but also provides an upper bound for the performance on the generalization task; if the models are not able to generalize for closely related languages they would most certainly be unable to generalize when translating between unrelated languages.",
"We use our dictionaries to train and evaluate three of the best performing BLI models BIBREF3, BIBREF4, BIBREF5 on all 40 language pairs. To paint a complete picture of the models' generalization ability we propose a new experimental paradigm in which we independently control for four different variables: the word form's frequency, morphology, the lexeme frequency and the lexeme (a total of 480 experiments). Our comprehensive analysis reveals that BLI models can generalize for frequent morphosyntactic categories, even of infrequent lexemes, but fail to generalize for the more rare categories. This yields a more nuanced picture of the known deficiency of word embeddings to underperform on infrequent words BIBREF6. Our findings also contradict the strong empirical claims made elsewhere in the literature BIBREF4, BIBREF2, BIBREF5, BIBREF7, as we observe that performance severely degrades when the evaluation includes rare morphological variants of a word and infrequent lexemes. We picture this general trend in Figure FIGREF2, which also highlights the skew of existing dictionaries towards more frequent words. As our final contribution, we demonstrate that better encoding of morphology is indeed beneficial: enforcing a simple morphological constraint yields consistent performance improvements for all Romance language pairs and many of the Slavic language pairs.z"
],
[
"Frequent word forms can often be found in human-curated dictionaries. Thus, the practical purpose of training a BLI model should be to create translations of new and less common forms, not present in the existing resources. In spite of this, most ground truth lexica used for BLI evaluation contain mainly frequent word forms. Many available resources are restricted to the top 200k most frequent words; this applies to the English–Italian dictionary of BIBREF8, the English–German and English–Finnish dictionaries of BIBREF4, and BIBREF9's English–Spanish resource. The dictionaries of BIBREF10 contain only the top most frequent 10k words for each language. BIBREF11 extracted their Spanish–English and Italian–English lexica from Open Multilingual WordNet BIBREF12, a resource which only yields high frequency, lemma level mappings. Another example is the recent MUSE dataset BIBREF2, which was generated using an “internal translation tool”, and in which the majority of word pairs consist of forms ranked in the top 10k of the vocabularies of their respective languages.",
"Another problem associated with existing resources is `semantic leakage' between train and evaluation sets. As we demonstrate in §SECREF14, it is common for a single lexeme to appear in both train and test dictionary—in the form of different word inflections. This circumstance is undesirable in evaluation settings as it can lead to performance overstatements—a model can `memorize' the corresponding target lemma, which ultimately reduces the translation task to a much easier task of finding the most appropriate inflection. Finally, most of the available BLI resources include English in each language pair and, given how morphologically impoverished English is, those resources are unsuitable for analysis of morphological generalization."
],
[
"To address the shortcomings of the existing evaluation, we built 40 new morphologically complete dictionaries, which contain most of the inflectional paradigm of every word they contain. This enables a more thorough evaluation and makes the task much more challenging than traditional evaluation sets. In contrast to the existing resources our dictionaries consist of many rare forms, some of which are out-of-vocabulary for large-scale word embeddings such as fastText. Notably, this makes them the only resource of this kind that enables evaluating open-vocabulary BLI.",
"We focus on pairs of genetically-related languages for which we can cleanly map one morphological inflection onto another. We selected 5 languages from the Slavic family: Polish, Czech, Russian, Slovak and Ukrainian, and 5 Romance languages: French, Spanish, Italian, Portuguese and Catalan. Table TABREF5 presents an example extract from our resource; every source–target pair is followed by their corresponding lemmata and a shared tag.",
"We generated our dictionaries automatically based on openly available resources: Open Multilingual WordNet BIBREF12 and Extended Open Multilingual WordNet BIBREF13, both of which are collections of lexical databases which group words into sets of synonyms (synsets), and UniMorph BIBREF14—a resource comprised of inflectional word paradigms for 107 languages, extracted from Wiktionary and annotated according to the UniMorph schema BIBREF15. For each language pair $(L1, L2)$ we first generated lemma translation pairs by mapping all $L1$ lemmata to all $L2$ lemmata for each synset that appeared in both $L1$ and $L2$ WordNets. We then filtered out the pairs which contained lemmata not present in UniMorph and generated inflected entries from the remaining pairs: one entry for each tag that appears in the UniMorph paradigms of both lemmata. The sizes of dictionaries vary across different language pairs and so does the POS distribution. In particular, while Slavic dictionaries are dominated by nouns and adjectives, verbs constitute the majority of pairs in Romance dictionaries. We report the sizes of the dictionaries in Table TABREF6. In order to prevent semantic leakage, discussed in §SECREF4, for each language pair we split the initial dictionary into train, development and test splits so that each sub-dictionary has its own, independent set of lemmata. In our split, the train dictionary contains 60% of all lemmata, while the development and test dictionaries each have 20% of the lemmata."
],
[
"In this section we briefly outline important differences between our resource and the MUSE dictionaries BIBREF2 for Portuguese, Italian, Spanish, and French (12 dictionaries in total). We focus on MUSE as it is one of the few openly available resources that covers genetically-related language pairs."
],
[
"The first and most prominent difference lies in the skew towards frequent word forms in MUSE evaluation. While our test dictionaries contain a representative sample of forms in lower frequency bins, the majority of forms present in MUSE are ranked in the top 10k in their respective language vocabularies. This is clearly presented in Figure FIGREF2 for the French–Spanish resource and also holds for the remaining 11 dictionaries."
],
[
"Another difference lies in the morphological diversity of both dictionaries. The average proportion of paradigm covered for lemmata present in MUSE test dictionaries is 53% for nouns, 37% for adjectives and only 3% for verbs. We generally observe that for most lemmata the dictionaries contain only one inflection. In contrast, for our test dictionaries we get 97% coverage for nouns, 98% for adjectives and 67% for verbs. Note that we do not get 100% coverage as we are limited by the compatibility of source language and target language UniMorph resources."
],
[
"Finally, we carefully analyze the magnitude of the train–test paradigm leakage. We found that, on average 20% (299 out of 1500) of source words in MUSE test dictionaries share their lemma with a word in the corresponding train dictionary. E.g. the French–Spanish test set includes the form perdent—a third-person plural present indicative of perdre (to lose) which is present in the train set. Note that the splits we provide for our dictionaries do not suffer from any leakage as we ensure that each dictionary contains the full paradigm of every lemma."
],
[
"The task of bilingual lexicon induction is well established in the community BIBREF16, BIBREF17 and is the current standard choice for evaluation of cross-lingual word embedding models. Given a list of $N$ source language word forms $x_1, \\ldots , x_N$, the goal is to determine the most appropriate translation $t_i$, for each query form $x_i$. In the context of cross-lingual embeddings, this is commonly accomplished by finding a target language word that is most similar to $x_i$ in the shared semantic space, where words' similarity is usually computed using a cosine between their embeddings. The resulting set of $(x_i, t_i)$ pairs is then compared to the gold standard and evaluated using the precision at $k$ (P@$k$) metric, where $k$ is typically set to 1, 5 or 10. Throughout our evaluation we use P@1, which is equivalent to accuracy.",
"In our work, we focus on the supervised and semi-supervised settings in which the goal is to automatically generate a dictionary given only monolingual word embeddings and some initial, seed translations. For our experiments we selected the models of BIBREF3, BIBREF4 and BIBREF5—three of the best performing BLI models, which induce a shared cross-lingual embedding space by learning an orthogonal transformation from one monolingual space to another (model descriptions are given in the supplementary material). In particular, the last two employ a self-learning method in which they alternate between a mapping step and a word alignment (dictionary induction) step in an iterative manner. As we observed the same general trends across all models, in the body of the paper we only report the results for the best performing model of BIBREF5. We present the complete set of results in the supplementary material."
],
[
"We trained and evaluated all models using the Wikipedia fastText embeddings BIBREF19. Following the existing work, for training we only used the most frequent 200k words in both source and target vocabularies. To allow for evaluation on less frequent words, in all our experiments the models search through the whole target embedding matrix at evaluation (not just the top 200k words, as is common in the literature). This makes the task more challenging, but also gives a more accurate picture of performance. To enable evaluation on the unseen word forms we generated a fastText embedding for every out-of-vocabulary (OOV) inflection of every word in WordNet that also appears in UniMorph. We built those embeddings by summing the vectors of all $n$-grams that constitute an OOV form. In the OOV evaluation we append the resulting vectors to the original embedding matrices."
],
[
"We propose a novel quadripartite analysis of the BLI models, in which we independently control for four different variables: (i) word form frequency, (ii) morphology, (iii) lexeme frequency and (iv) lexeme. We provide detailed descriptions for each of those conditions in the following sections. For each condition, we analyzed all 40 language pairs for each of our selected models—a total of 480 experiments. In the body of the paper we only present a small representative subset of our results."
],
[
"For highly inflectional languages, many of the infrequent types are rare forms of otherwise common lexemes and, given the morphological regularity of less frequent forms, a model that generalizes well should be able to translate those capably. Thus, to gain insight into the models' generalization ability we first examine the relation between their performance and the frequency of words in the test set.",
"We split each test dictionary into 9 frequency bins, based on the relative frequencies of words in the original training corpus for the word embeddings (Wikipedia in the case of fastText). More specifically, a pair appears in a frequency bin if its source word belongs to that bin, according to its rank in the respective vocabulary. We also considered unseen words that appear in the test portion of our dictionaries, but do not occur in the training corpus for the embeddings. This is a fair experimental setting since most of those OOV words are associated with known lemmata. Note that it bears a resemblance to the classic Wug Test BIBREF20 in which a child is introduced to a single instance of a fictitious object—`a wug'—and is asked to name two instances of the same object—`wugs'. However, in contrast to the original setup, we are interested in making sure the unseen inflection of a known lexeme is properly translated.",
"Figure FIGREF18 presents the results on the BLI task for four example language pairs: two from the Slavic and two from the Romance language family. The left-hand side of the plots shows the performance for the full dictionaries (with and without OOVs), while the right-hand side demonstrates how the performance changes as the words in the evaluation set become less frequent. The general trend we observe across all language pairs is an acute drop in accuracy for infrequent word forms—e.g. for Catalan–Portuguese the performance falls from 83% for pairs containing only the top 10k most frequent words to 40% for pairs, which contain source words ranked between 200k and 300k."
],
[
"From the results of the previous section, it is not clear whether the models perform badly on inflections of generally infrequent lemmata or whether they fail on infrequent morphosyntactic categories, independently of the lexeme frequency. Indeed, the frequency of different morphosyntactic categories is far from uniform. To shed more light on the underlying cause of the performance drop in sec:freqcontrol, we first analyze the differences in the models' performance as they translate forms belonging to different categories and, next, look at the distribution of these categories across the frequency bins.",
"In Table TABREF26 we present our findings for a representative sample of morphosyntactic categories for one Slavic and one Romance language pair (we present the results for all models and all language pairs in the supplementary material). It illustrates the great variability across different paradigm slots—both in terms of their frequency and the difficulty of their translation.",
"As expected, the performance is best for the slots belonging to the highest frequency bins and forms residing in the rarer slots prove to be more challenging. For example, for French–Spanish the performance on , and is notably lower than that for the remaining categories. For both language pairs, the accuracy for the second-person plural present imperative () is particularly low: 1.5% accuracy for French–Spanish and 11.1% for Polish–Czech in the in-vocabulary setting. Note that it is unsurprising for an imperative form, expressing an order or command, to be infrequent in the Wikipedia corpora (the resource our monolingual embeddings were trained on). The complex distribution of the French across the frequency bins is likely due to syncretism—the paradigm slot shares a form with 2nd person present plural slot, . Our hypothesis is that syncretism may have an effect on the quality of the monolingual embeddings. To our knowledge, the effect of syncretism on embeddings has not yet been systematically investigated."
],
[
"To get an even more complete picture, we inspect how the performance on translating inflections of common lemmata differs to translating forms coming from less frequent paradigms by controlling for the frequency of the lexeme. We separated our dictionaries into two bins based on the relative frequency of the source lexeme. We approximated frequency of the lexemes by using ranks of their most common inflections: our first bin contained lexemes whose most common inflection is ranked in the top 20k forms in its respective vocabulary, while the second bin consisted of lexemes with most common inflection ranked lower than 60k. We present the results for the same morphosyntactic categories as in §SECREF27 on the left side of the graphs in Figure FIGREF30. As anticipated, in the case of less frequent lexemes the performance is generally worse than for frequent ones. However, perhaps more surprisingly, we discover that some morphosyntactic categories prove to be problematic even for the most frequent lexemes. Some examples include the previously mentioned imperative verb form or, for Slavic languages, singular dative nouns ()."
],
[
"We are, in principle, interested in the ability of the models to generalize morphologically. In the preceding sections we focused on the standard BLI evaluation, which given our objective is somewhat unfair to the models—they are additionally punished for not capturing lexical semantics. To gain more direct insight into the models' generalization abilities we develop a novel experiment in which the lexeme is controlled for. At test time, the BLI model is given a set of candidate translations, all of which belong to the same paradigm, and is asked to select the most suitable form. Note that the model only requires morphological knowledge to successfully complete the task—no lexical semantics is required. When mapping between closely related languages this task is particularly straightforward, and especially so in the case of fastText where a single $n$-gram, e.g. the suffix -ing in English as in the noun running, can be highly indicative of the inflectional morphology of the word.",
"We present results on 8 representative language pairs in Table TABREF35 (column Lexeme). We report the accuracy on the in-vocabulary pairs as well as all the pairs in the dictionary, including OOVs. As expected, compared to standard BLI this task is much easier for the models—the performance is generally high. For Slavic languages numbers remain high even in the open-vocabulary setup, which suggests that the models can generalize morphologically. On the other hand, for Romance languages we observe a visible drop in performance. We hypothesize that this difference is due to the large quantities of verbs in Romance dictionaries; in both Slavic and Romance languages verbs have substantial paradigms, often of more than 60 forms, which makes identifying the correct form more difficult. In contrast, most words in our Slavic dictionaries are nouns and adjectives with much smaller paradigms.",
"Following our analysis in sec:parcontrol, we also examine how the performance on this new task differs for less and more frequent paradigms, as well as across different morphosyntactic categories. Here, we exhibit an unexpected result, which we present in the two right-hand side graphs of Figure FIGREF30: the state-of-the-art BLI models do generalize morphologically for frequent slots, but do not generalize for infrequent slots. For instance, for the Polish–Czech pair, the models achieve 100% accuracy on identifying the correct inflection when this inflection is , , or for frequent and, for the first two categories, also the infrequent lexemes; all of which are common morphosyntactic categories (see Table TABREF26). The results from Figure FIGREF30 also demonstrate that the worst performing forms for the French–Spanish language pair are indeed the infrequent verbal inflections."
],
[
"So far, in our evaluation we have focused on pairs of genetically-related languages, which provided an upper bound for morphological generalization in BLI. But our experimental paradigm is not limited to related language pairs. We demonstrate this by experimenting on two example pairs of one Slavic and one Romance language: Polish–Spanish and Spanish–Polish. To construct the dictionaries we followed the procedure discussed in §SECREF2, but matched the tags based only on the features exhibited in both languages (e.g. Polish can be mapped to in Spanish, as Spanish nouns are not declined for case). Note that mapping between morphosyntactic categories of two unrelated languages is a challenging task BIBREF21, but we did our best to address the issues specific to translation between Polish and Spanish. E.g. we ensured that Spanish imperfective/perfective verb forms can only be translated to Polish forms of imperfective/perfective verbs.",
"The results of our experiments are presented in the last two rows of Table TABREF35 and, for Polish–Spanish, also in Figure FIGREF39. As expected, the BLI results on unrelated languages are generally, but not uniformly, worse than those on related language pairs. The accuracy for Spanish–Polish is particularly low, at 28% (for in vocabulary pairs). We see large variation in performance across morphosyntactic categories and more and less frequent lexemes, similar to that observed for related language pairs. In particular, we observe that —the category difficult for Polish–Czech BLI is also among the most challenging for Polish–Spanish. However, one of the highest performing categories for Polish–Czech, , yields much worse accuracy for Polish–Spanish."
],
[
"In our final experiment we demonstrate that improving morphological generalization has the potential to improve BLI results. We show that enforcing a simple, hard morphological constraint at training time can lead to performance improvements at test time—both on the standard BLI task and the controlled for lexeme BLI. We adapt the self-learning models of BIBREF4 and BIBREF5 so that at each iteration they can align two words only if they share the same morphosyntactic category. Note that this limits the training data only to word forms present in UniMorph, as those are the only ones for which we have a gold tag. The results, a subset of which we present in Table TABREF35, show that the constraint, despite its simplicity and being trained on less data, leads to performance improvements for every Romance language pair and many of the Slavic language pairs. We take this as evidence that properly modelling morphology will have a role to play in BLI."
],
[
"We conducted a large-scale evaluation of the generalization ability of the state-of-the-art bilingual lexicon inducers. To enable our analysis we created 40 morphologically complete dictionaries for 5 Slavic and 5 Romance languages and proposed a novel experimental paradigm in which we independently control for four different variables.",
"Our study is the first to examine morphological generalization in BLI and it reveals a nuanced picture of the interplay between performance, the word's frequency and morphology. We observe that the performance degrades when models are evaluated on less common words—even for the infrequent forms of common lexemes. Our results from the controlled for lexeme experiments suggest that models are able to generalize well for more frequent morphosyntactic categories and for part-of-speech with smaller paradigms. However, their ability to generalize decreases as the slots get less frequent and/or the paradigms get larger. Finally, we proposed a simple method to inject morphological knowledge and demonstrated that making models more morphologically aware can lead to general performance improvements."
]
],
"section_name": [
"Introduction",
"Morphological Dictionaries ::: Existing Dictionaries",
"Morphological Dictionaries ::: Our Dictionaries",
"Morphological Dictionaries ::: Comparison with MUSE",
"Morphological Dictionaries ::: Comparison with MUSE ::: Word Frequency",
"Morphological Dictionaries ::: Comparison with MUSE ::: Morphological Diversity",
"Morphological Dictionaries ::: Comparison with MUSE ::: Train–test Paradigm Leakage",
"Bilingual Lexicon Induction",
"Bilingual Lexicon Induction ::: Experimental Setup",
"Morphological Generalization",
"Morphological Generalization ::: Controlling for Word Frequency",
"Morphological Generalization ::: Controlling for Morphology",
"Morphological Generalization ::: Controlling for Lexeme Frequency",
"Morphological Generalization ::: Controlling for Lexeme",
"Morphological Generalization ::: Experiments on an Unrelated Language Pair",
"Morphological Generalization ::: Adding a Morphological Constraint",
"Discussion and Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"172331a689d95f5b07fc09fb785fb7de8cafd212",
"62080f0e9a877713d57daaa9d1f302b8cf15da4d"
],
"answer": [
{
"evidence": [
"We use our dictionaries to train and evaluate three of the best performing BLI models BIBREF3, BIBREF4, BIBREF5 on all 40 language pairs. To paint a complete picture of the models' generalization ability we propose a new experimental paradigm in which we independently control for four different variables: the word form's frequency, morphology, the lexeme frequency and the lexeme (a total of 480 experiments). Our comprehensive analysis reveals that BLI models can generalize for frequent morphosyntactic categories, even of infrequent lexemes, but fail to generalize for the more rare categories. This yields a more nuanced picture of the known deficiency of word embeddings to underperform on infrequent words BIBREF6. Our findings also contradict the strong empirical claims made elsewhere in the literature BIBREF4, BIBREF2, BIBREF5, BIBREF7, as we observe that performance severely degrades when the evaluation includes rare morphological variants of a word and infrequent lexemes. We picture this general trend in Figure FIGREF2, which also highlights the skew of existing dictionaries towards more frequent words. As our final contribution, we demonstrate that better encoding of morphology is indeed beneficial: enforcing a simple morphological constraint yields consistent performance improvements for all Romance language pairs and many of the Slavic language pairs.z"
],
"extractive_spans": [
"BIBREF3, BIBREF4, BIBREF5 "
],
"free_form_answer": "",
"highlighted_evidence": [
"We use our dictionaries to train and evaluate three of the best performing BLI models BIBREF3, BIBREF4, BIBREF5 on all 40 language pairs."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use our dictionaries to train and evaluate three of the best performing BLI models BIBREF3, BIBREF4, BIBREF5 on all 40 language pairs. To paint a complete picture of the models' generalization ability we propose a new experimental paradigm in which we independently control for four different variables: the word form's frequency, morphology, the lexeme frequency and the lexeme (a total of 480 experiments). Our comprehensive analysis reveals that BLI models can generalize for frequent morphosyntactic categories, even of infrequent lexemes, but fail to generalize for the more rare categories. This yields a more nuanced picture of the known deficiency of word embeddings to underperform on infrequent words BIBREF6. Our findings also contradict the strong empirical claims made elsewhere in the literature BIBREF4, BIBREF2, BIBREF5, BIBREF7, as we observe that performance severely degrades when the evaluation includes rare morphological variants of a word and infrequent lexemes. We picture this general trend in Figure FIGREF2, which also highlights the skew of existing dictionaries towards more frequent words. As our final contribution, we demonstrate that better encoding of morphology is indeed beneficial: enforcing a simple morphological constraint yields consistent performance improvements for all Romance language pairs and many of the Slavic language pairs.z"
],
"extractive_spans": [
"BIBREF3",
"BIBREF4",
"BIBREF5"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use our dictionaries to train and evaluate three of the best performing BLI models BIBREF3, BIBREF4, BIBREF5 on all 40 language pairs"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"2481c2e7052ed5cfe356c952e446cd0236d114e2",
"59d13d391a004cc0d568960f9b9994d3623aec3d"
],
"answer": [
{
"evidence": [
"In our final experiment we demonstrate that improving morphological generalization has the potential to improve BLI results. We show that enforcing a simple, hard morphological constraint at training time can lead to performance improvements at test time—both on the standard BLI task and the controlled for lexeme BLI. We adapt the self-learning models of BIBREF4 and BIBREF5 so that at each iteration they can align two words only if they share the same morphosyntactic category. Note that this limits the training data only to word forms present in UniMorph, as those are the only ones for which we have a gold tag. The results, a subset of which we present in Table TABREF35, show that the constraint, despite its simplicity and being trained on less data, leads to performance improvements for every Romance language pair and many of the Slavic language pairs. We take this as evidence that properly modelling morphology will have a role to play in BLI."
],
"extractive_spans": [],
"free_form_answer": "Aligned words must share the same morphosyntactic category",
"highlighted_evidence": [
"We adapt the self-learning models of BIBREF4 and BIBREF5 so that at each iteration they can align two words only if they share the same morphosyntactic category."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In our final experiment we demonstrate that improving morphological generalization has the potential to improve BLI results. We show that enforcing a simple, hard morphological constraint at training time can lead to performance improvements at test time—both on the standard BLI task and the controlled for lexeme BLI. We adapt the self-learning models of BIBREF4 and BIBREF5 so that at each iteration they can align two words only if they share the same morphosyntactic category. Note that this limits the training data only to word forms present in UniMorph, as those are the only ones for which we have a gold tag. The results, a subset of which we present in Table TABREF35, show that the constraint, despite its simplicity and being trained on less data, leads to performance improvements for every Romance language pair and many of the Slavic language pairs. We take this as evidence that properly modelling morphology will have a role to play in BLI."
],
"extractive_spans": [
"each iteration they can align two words only if they share the same morphosyntactic category"
],
"free_form_answer": "",
"highlighted_evidence": [
"We show that enforcing a simple, hard morphological constraint at training time can lead to performance improvements at test time—both on the standard BLI task and the controlled for lexeme BLI. We adapt the self-learning models of BIBREF4 and BIBREF5 so that at each iteration they can align two words only if they share the same morphosyntactic category."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
],
"nlp_background": [
"two",
"two"
],
"paper_read": [
"no",
"no"
],
"question": [
"What are the three SOTA models evaluated?",
"What is the morphological constraint added?"
],
"question_id": [
"34e9e54fa79e89ecacac35f97b33ef3ca3a00f85",
"6e63db22a2a34c20ad341eb33f3422f40d0001d3"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"Spanish",
"Spanish"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: The relation between the BLI performance and the frequency of source words in the test dictionary. The graph presents results for the model of Ruder et al. (2018) evaluated on both the MUSE dictionary (Conneau et al., 2018) and our morphologically complete dictionary, which contains many rare morphological variants of words. The numbers above the bars correspond to the number of translated source words (a hyphen represents an empty dictionary).",
"Table 1: An example extract from our morphologically complete Polish–Czech dictionary.",
"Table 2: The sizes of our morphologically complete dictionaries for Slavic and Romance language families. We present the sizes for 20 base dictionaries. We further split those to obtain 40 train, development and test dictionaries—one for each mapping direction to ensure the correct source language lemma separation.",
"Figure 2: The relation between performance and the frequency of source words in the test dictionary for four example language pairs on the standard BLI task. The numbers above the bars correspond to the dictionary sizes.",
"Table 3: BLI results for word pairs that have a specific morphosyntactic category (left) and a distribution of those forms across different frequency bins (right).",
"Figure 3: The performance on the standard BLI task (left side of the graphs) and the controlled for lexeme BLI (right side) for words pairs belonging to the most frequent paradigms and the infrequent paradigms. The numbers above the bars are dictionary sizes and the number of out-of-vocabulary forms in each dictionary (bracketed).",
"Table 4: The results on the standard BLI task and BLI controlled for lexeme for the original Ruder et al. (2018)’s model (7) and the same model trained with a morphological constraint (3) (discussed in §4.6).",
"Figure 4: The results of the experiments on a pair of unrelated languages—Polish and Spanish—on the standard BLI task (left side) and the controlled for lexeme BLI (right side) for word pairs belonging to the most frequent and the infrequent paradigms."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"3-Table2-1.png",
"5-Figure2-1.png",
"6-Table3-1.png",
"7-Figure3-1.png",
"8-Table4-1.png",
"8-Figure4-1.png"
]
} | [
"What is the morphological constraint added?"
] | [
[
"1909.02855-Morphological Generalization ::: Adding a Morphological Constraint-0"
]
] | [
"Aligned words must share the same morphosyntactic category"
] | 347 |